Nov 29 09:24:06 np0005539860 kernel: Linux version 5.14.0-642.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-68.el9) #1 SMP PREEMPT_DYNAMIC Thu Nov 20 14:15:03 UTC 2025
Nov 29 09:24:06 np0005539860 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Nov 29 09:24:06 np0005539860 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 09:24:06 np0005539860 kernel: BIOS-provided physical RAM map:
Nov 29 09:24:06 np0005539860 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Nov 29 09:24:06 np0005539860 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Nov 29 09:24:06 np0005539860 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Nov 29 09:24:06 np0005539860 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Nov 29 09:24:06 np0005539860 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Nov 29 09:24:06 np0005539860 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Nov 29 09:24:06 np0005539860 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Nov 29 09:24:06 np0005539860 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
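[annotation] The three "usable" ranges in the e820 map above fix the guest's RAM size. A minimal Python sketch (values copied verbatim from this log) sums them; the result, almost exactly 8 GiB, is consistent with the "Memory: 7765868K/8388068K available" line later in this boot:

    # Sum the inclusive [start, end] "usable" BIOS-e820 ranges from this log.
    usable = [
        (0x0000000000000000, 0x000000000009fbff),
        (0x0000000000100000, 0x00000000bffdafff),
        (0x0000000100000000, 0x000000023fffffff),
    ]
    total = sum(end - start + 1 for start, end in usable)
    print(f"{total} bytes = {total / 2**30:.3f} GiB")
    # -> 8589454336 bytes = 8.000 GiB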
Nov 29 09:24:06 np0005539860 kernel: NX (Execute Disable) protection: active
Nov 29 09:24:06 np0005539860 kernel: APIC: Static calls initialized
Nov 29 09:24:06 np0005539860 kernel: SMBIOS 2.8 present.
Nov 29 09:24:06 np0005539860 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Nov 29 09:24:06 np0005539860 kernel: Hypervisor detected: KVM
Nov 29 09:24:06 np0005539860 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Nov 29 09:24:06 np0005539860 kernel: kvm-clock: using sched offset of 3377007289 cycles
Nov 29 09:24:06 np0005539860 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Nov 29 09:24:06 np0005539860 kernel: tsc: Detected 2799.998 MHz processor
Nov 29 09:24:06 np0005539860 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Nov 29 09:24:06 np0005539860 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Nov 29 09:24:06 np0005539860 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Nov 29 09:24:06 np0005539860 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Nov 29 09:24:06 np0005539860 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Nov 29 09:24:06 np0005539860 kernel: Using GB pages for direct mapping
Nov 29 09:24:06 np0005539860 kernel: RAMDISK: [mem 0x2d83a000-0x32c14fff]
Nov 29 09:24:06 np0005539860 kernel: ACPI: Early table checksum verification disabled
Nov 29 09:24:06 np0005539860 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Nov 29 09:24:06 np0005539860 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 09:24:06 np0005539860 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 09:24:06 np0005539860 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 09:24:06 np0005539860 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Nov 29 09:24:06 np0005539860 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 09:24:06 np0005539860 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Nov 29 09:24:06 np0005539860 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Nov 29 09:24:06 np0005539860 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Nov 29 09:24:06 np0005539860 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Nov 29 09:24:06 np0005539860 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Nov 29 09:24:06 np0005539860 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Nov 29 09:24:06 np0005539860 kernel: No NUMA configuration found
Nov 29 09:24:06 np0005539860 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Nov 29 09:24:06 np0005539860 kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Nov 29 09:24:06 np0005539860 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
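[annotation] The 256 MB reservation above follows from the crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M parameter on the command line: each comma-separated entry is a <start>-<end>:<size> rule, and the rule whose range contains total system RAM wins. A rough Python sketch of that documented selection logic (illustrative only, not the kernel's actual parser):

    # How a crashkernel=<range>:<size>[,...] string maps total RAM to a
    # reservation size, per the documented selection rule.
    UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

    def to_bytes(s):
        return int(s[:-1]) * UNITS[s[-1]] if s and s[-1] in UNITS else int(s or 0)

    def crashkernel_size(param, ram_bytes):
        for entry in param.split(","):
            rng, size = entry.split(":")
            start, _, end = rng.partition("-")
            lo = to_bytes(start)
            hi = to_bytes(end) if end else float("inf")
            if lo <= ram_bytes < hi:
                return to_bytes(size)
        return 0

    ram = 8 * 2**30  # ~8 GiB, per the e820 map above
    print(crashkernel_size("1G-2G:192M,2G-64G:256M,64G-:512M", ram) // 2**20, "MiB")
    # -> 256 MiB, matching the "crashkernel reserved ... (256 MB)" line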
Nov 29 09:24:06 np0005539860 kernel: Zone ranges:
Nov 29 09:24:06 np0005539860 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Nov 29 09:24:06 np0005539860 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Nov 29 09:24:06 np0005539860 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 09:24:06 np0005539860 kernel:  Device   empty
Nov 29 09:24:06 np0005539860 kernel: Movable zone start for each node
Nov 29 09:24:06 np0005539860 kernel: Early memory node ranges
Nov 29 09:24:06 np0005539860 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Nov 29 09:24:06 np0005539860 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Nov 29 09:24:06 np0005539860 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Nov 29 09:24:06 np0005539860 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Nov 29 09:24:06 np0005539860 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 29 09:24:06 np0005539860 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Nov 29 09:24:06 np0005539860 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Nov 29 09:24:06 np0005539860 kernel: ACPI: PM-Timer IO Port: 0x608
Nov 29 09:24:06 np0005539860 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Nov 29 09:24:06 np0005539860 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Nov 29 09:24:06 np0005539860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Nov 29 09:24:06 np0005539860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Nov 29 09:24:06 np0005539860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 29 09:24:06 np0005539860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Nov 29 09:24:06 np0005539860 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Nov 29 09:24:06 np0005539860 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 29 09:24:06 np0005539860 kernel: TSC deadline timer available
Nov 29 09:24:06 np0005539860 kernel: CPU topo: Max. logical packages:   8
Nov 29 09:24:06 np0005539860 kernel: CPU topo: Max. logical dies:       8
Nov 29 09:24:06 np0005539860 kernel: CPU topo: Max. dies per package:   1
Nov 29 09:24:06 np0005539860 kernel: CPU topo: Max. threads per core:   1
Nov 29 09:24:06 np0005539860 kernel: CPU topo: Num. cores per package:     1
Nov 29 09:24:06 np0005539860 kernel: CPU topo: Num. threads per package:   1
Nov 29 09:24:06 np0005539860 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Nov 29 09:24:06 np0005539860 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Nov 29 09:24:06 np0005539860 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Nov 29 09:24:06 np0005539860 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Nov 29 09:24:06 np0005539860 kernel: Booting paravirtualized kernel on KVM
Nov 29 09:24:06 np0005539860 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 29 09:24:06 np0005539860 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Nov 29 09:24:06 np0005539860 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Nov 29 09:24:06 np0005539860 kernel: kvm-guest: PV spinlocks disabled, no host support
Nov 29 09:24:06 np0005539860 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 09:24:06 np0005539860 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64", will be passed to user space.
Nov 29 09:24:06 np0005539860 kernel: random: crng init done
Nov 29 09:24:06 np0005539860 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
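[annotation] In these hash-table lines, "order: N" means the table occupies 2**N contiguous 4 KiB pages, and with pointer-sized (8-byte) buckets the entry count times eight reproduces the byte figure. A quick check against the two lines above (the 8-byte bucket size is an assumption that holds for x86_64 here):

    # Cross-check "entries (order: N, bytes)" for the dentry/inode caches.
    PAGE = 4096
    for name, entries, order, reported in [
        ("dentry", 1048576, 11, 8388608),
        ("inode",   524288, 10, 4194304),
    ]:
        assert (1 << order) * PAGE == entries * 8 == reported
        print(name, "ok")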
Nov 29 09:24:06 np0005539860 kernel: Fallback order for Node 0: 0 
Nov 29 09:24:06 np0005539860 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Nov 29 09:24:06 np0005539860 kernel: Policy zone: Normal
Nov 29 09:24:06 np0005539860 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 29 09:24:06 np0005539860 kernel: software IO TLB: area num 8.
Nov 29 09:24:06 np0005539860 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Nov 29 09:24:06 np0005539860 kernel: ftrace: allocating 49313 entries in 193 pages
Nov 29 09:24:06 np0005539860 kernel: ftrace: allocated 193 pages with 3 groups
Nov 29 09:24:06 np0005539860 kernel: Dynamic Preempt: voluntary
Nov 29 09:24:06 np0005539860 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 29 09:24:06 np0005539860 kernel: rcu: 	RCU event tracing is enabled.
Nov 29 09:24:06 np0005539860 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Nov 29 09:24:06 np0005539860 kernel: 	Trampoline variant of Tasks RCU enabled.
Nov 29 09:24:06 np0005539860 kernel: 	Rude variant of Tasks RCU enabled.
Nov 29 09:24:06 np0005539860 kernel: 	Tracing variant of Tasks RCU enabled.
Nov 29 09:24:06 np0005539860 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 29 09:24:06 np0005539860 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Nov 29 09:24:06 np0005539860 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 09:24:06 np0005539860 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 09:24:06 np0005539860 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Nov 29 09:24:06 np0005539860 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Nov 29 09:24:06 np0005539860 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 29 09:24:06 np0005539860 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Nov 29 09:24:06 np0005539860 kernel: Console: colour VGA+ 80x25
Nov 29 09:24:06 np0005539860 kernel: printk: console [ttyS0] enabled
Nov 29 09:24:06 np0005539860 kernel: ACPI: Core revision 20230331
Nov 29 09:24:06 np0005539860 kernel: APIC: Switch to symmetric I/O mode setup
Nov 29 09:24:06 np0005539860 kernel: x2apic enabled
Nov 29 09:24:06 np0005539860 kernel: APIC: Switched APIC routing to: physical x2apic
Nov 29 09:24:06 np0005539860 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Nov 29 09:24:06 np0005539860 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
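[annotation] The preset BogoMIPS value is derived from lpj (timer loops per jiffy) rather than measured. Assuming HZ=1000 (RHEL 9 x86_64 kernels are built with CONFIG_HZ_1000), BogoMIPS ≈ lpj × HZ / 500000, i.e. roughly twice the TSC frequency in MHz; the same arithmetic reproduces the 8-CPU total printed later at the "smpboot: Total of 8 processors" line:

    lpj, HZ = 2799998, 1000        # lpj from this log; HZ=1000 is an assumption
    print(lpj * HZ / 500_000)      # -> 5599.996, logged as "5599.99 BogoMIPS"
    print(8 * lpj * HZ / 500_000)  # -> 44799.968, logged as "44799.96"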
Nov 29 09:24:06 np0005539860 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Nov 29 09:24:06 np0005539860 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Nov 29 09:24:06 np0005539860 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Nov 29 09:24:06 np0005539860 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 29 09:24:06 np0005539860 kernel: Spectre V2 : Mitigation: Retpolines
Nov 29 09:24:06 np0005539860 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Nov 29 09:24:06 np0005539860 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Nov 29 09:24:06 np0005539860 kernel: RETBleed: Mitigation: untrained return thunk
Nov 29 09:24:06 np0005539860 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Nov 29 09:24:06 np0005539860 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Nov 29 09:24:06 np0005539860 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Nov 29 09:24:06 np0005539860 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Nov 29 09:24:06 np0005539860 kernel: x86/bugs: return thunk changed
Nov 29 09:24:06 np0005539860 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Nov 29 09:24:06 np0005539860 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 29 09:24:06 np0005539860 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 29 09:24:06 np0005539860 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 29 09:24:06 np0005539860 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Nov 29 09:24:06 np0005539860 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Nov 29 09:24:06 np0005539860 kernel: Freeing SMP alternatives memory: 40K
Nov 29 09:24:06 np0005539860 kernel: pid_max: default: 32768 minimum: 301
Nov 29 09:24:06 np0005539860 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Nov 29 09:24:06 np0005539860 kernel: landlock: Up and running.
Nov 29 09:24:06 np0005539860 kernel: Yama: becoming mindful.
Nov 29 09:24:06 np0005539860 kernel: SELinux:  Initializing.
Nov 29 09:24:06 np0005539860 kernel: LSM support for eBPF active
Nov 29 09:24:06 np0005539860 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Nov 29 09:24:06 np0005539860 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Nov 29 09:24:06 np0005539860 kernel: ... version:                0
Nov 29 09:24:06 np0005539860 kernel: ... bit width:              48
Nov 29 09:24:06 np0005539860 kernel: ... generic registers:      6
Nov 29 09:24:06 np0005539860 kernel: ... value mask:             0000ffffffffffff
Nov 29 09:24:06 np0005539860 kernel: ... max period:             00007fffffffffff
Nov 29 09:24:06 np0005539860 kernel: ... fixed-purpose events:   0
Nov 29 09:24:06 np0005539860 kernel: ... event mask:             000000000000003f
Nov 29 09:24:06 np0005539860 kernel: signal: max sigframe size: 1776
Nov 29 09:24:06 np0005539860 kernel: rcu: Hierarchical SRCU implementation.
Nov 29 09:24:06 np0005539860 kernel: rcu: 	Max phase no-delay instances is 400.
Nov 29 09:24:06 np0005539860 kernel: smp: Bringing up secondary CPUs ...
Nov 29 09:24:06 np0005539860 kernel: smpboot: x86: Booting SMP configuration:
Nov 29 09:24:06 np0005539860 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Nov 29 09:24:06 np0005539860 kernel: smp: Brought up 1 node, 8 CPUs
Nov 29 09:24:06 np0005539860 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Nov 29 09:24:06 np0005539860 kernel: node 0 deferred pages initialised in 9ms
Nov 29 09:24:06 np0005539860 kernel: Memory: 7765868K/8388068K available (16384K kernel code, 5787K rwdata, 13900K rodata, 4192K init, 7172K bss, 616276K reserved, 0K cma-reserved)
Nov 29 09:24:06 np0005539860 kernel: devtmpfs: initialized
Nov 29 09:24:06 np0005539860 kernel: x86/mm: Memory block size: 128MB
Nov 29 09:24:06 np0005539860 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 29 09:24:06 np0005539860 kernel: futex hash table entries: 2048 (order: 5, 131072 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: pinctrl core: initialized pinctrl subsystem
Nov 29 09:24:06 np0005539860 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 29 09:24:06 np0005539860 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Nov 29 09:24:06 np0005539860 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 29 09:24:06 np0005539860 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 29 09:24:06 np0005539860 kernel: audit: initializing netlink subsys (disabled)
Nov 29 09:24:06 np0005539860 kernel: audit: type=2000 audit(1764426244.594:1): state=initialized audit_enabled=0 res=1
Nov 29 09:24:06 np0005539860 kernel: thermal_sys: Registered thermal governor 'fair_share'
Nov 29 09:24:06 np0005539860 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 29 09:24:06 np0005539860 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 29 09:24:06 np0005539860 kernel: cpuidle: using governor menu
Nov 29 09:24:06 np0005539860 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 29 09:24:06 np0005539860 kernel: PCI: Using configuration type 1 for base access
Nov 29 09:24:06 np0005539860 kernel: PCI: Using configuration type 1 for extended access
Nov 29 09:24:06 np0005539860 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 29 09:24:06 np0005539860 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 29 09:24:06 np0005539860 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 29 09:24:06 np0005539860 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 29 09:24:06 np0005539860 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
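[annotation] The "vmemmap can be freed" figures come from HugeTLB vmemmap optimization: each huge page's struct-page metadata collapses down to a single retained base page. Assuming 4 KiB base pages and a 64-byte struct page (the usual x86_64 values), the arithmetic reproduces both numbers above:

    BASE_PAGE, STRUCT_PAGE = 4096, 64  # assumptions: typical x86_64 values

    def freeable_kib(huge_bytes):
        vmemmap = huge_bytes // BASE_PAGE * STRUCT_PAGE  # metadata per huge page
        return (vmemmap - BASE_PAGE) // 1024             # all but one page freed

    print(freeable_kib(1 << 30))  # -> 16380, as logged for the 1.00 GiB size
    print(freeable_kib(2 << 20))  # -> 28, as logged for the 2.00 MiB size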
Nov 29 09:24:06 np0005539860 kernel: Demotion targets for Node 0: null
Nov 29 09:24:06 np0005539860 kernel: cryptd: max_cpu_qlen set to 1000
Nov 29 09:24:06 np0005539860 kernel: ACPI: Added _OSI(Module Device)
Nov 29 09:24:06 np0005539860 kernel: ACPI: Added _OSI(Processor Device)
Nov 29 09:24:06 np0005539860 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 29 09:24:06 np0005539860 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 29 09:24:06 np0005539860 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 29 09:24:06 np0005539860 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 29 09:24:06 np0005539860 kernel: ACPI: Interpreter enabled
Nov 29 09:24:06 np0005539860 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Nov 29 09:24:06 np0005539860 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 29 09:24:06 np0005539860 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 29 09:24:06 np0005539860 kernel: PCI: Using E820 reservations for host bridge windows
Nov 29 09:24:06 np0005539860 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Nov 29 09:24:06 np0005539860 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 29 09:24:06 np0005539860 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [3] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [4] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [5] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [6] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [7] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [8] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [9] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [10] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [11] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [12] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [13] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [14] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [15] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [16] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [17] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [18] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [19] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [20] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [21] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [22] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [23] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [24] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [25] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [26] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [27] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [28] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [29] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [30] registered
Nov 29 09:24:06 np0005539860 kernel: acpiphp: Slot [31] registered
Nov 29 09:24:06 np0005539860 kernel: PCI host bridge to bus 0000:00
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Nov 29 09:24:06 np0005539860 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Nov 29 09:24:06 np0005539860 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Nov 29 09:24:06 np0005539860 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Nov 29 09:24:06 np0005539860 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Nov 29 09:24:06 np0005539860 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Nov 29 09:24:06 np0005539860 kernel: iommu: Default domain type: Translated
Nov 29 09:24:06 np0005539860 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 29 09:24:06 np0005539860 kernel: SCSI subsystem initialized
Nov 29 09:24:06 np0005539860 kernel: ACPI: bus type USB registered
Nov 29 09:24:06 np0005539860 kernel: usbcore: registered new interface driver usbfs
Nov 29 09:24:06 np0005539860 kernel: usbcore: registered new interface driver hub
Nov 29 09:24:06 np0005539860 kernel: usbcore: registered new device driver usb
Nov 29 09:24:06 np0005539860 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 29 09:24:06 np0005539860 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Nov 29 09:24:06 np0005539860 kernel: PTP clock support registered
Nov 29 09:24:06 np0005539860 kernel: EDAC MC: Ver: 3.0.0
Nov 29 09:24:06 np0005539860 kernel: NetLabel: Initializing
Nov 29 09:24:06 np0005539860 kernel: NetLabel:  domain hash size = 128
Nov 29 09:24:06 np0005539860 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Nov 29 09:24:06 np0005539860 kernel: NetLabel:  unlabeled traffic allowed by default
Nov 29 09:24:06 np0005539860 kernel: PCI: Using ACPI for IRQ routing
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Nov 29 09:24:06 np0005539860 kernel: vgaarb: loaded
Nov 29 09:24:06 np0005539860 kernel: clocksource: Switched to clocksource kvm-clock
Nov 29 09:24:06 np0005539860 kernel: VFS: Disk quotas dquot_6.6.0
Nov 29 09:24:06 np0005539860 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 29 09:24:06 np0005539860 kernel: pnp: PnP ACPI init
Nov 29 09:24:06 np0005539860 kernel: pnp: PnP ACPI: found 5 devices
Nov 29 09:24:06 np0005539860 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 29 09:24:06 np0005539860 kernel: NET: Registered PF_INET protocol family
Nov 29 09:24:06 np0005539860 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 29 09:24:06 np0005539860 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 29 09:24:06 np0005539860 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 29 09:24:06 np0005539860 kernel: NET: Registered PF_XDP protocol family
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Nov 29 09:24:06 np0005539860 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Nov 29 09:24:06 np0005539860 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Nov 29 09:24:06 np0005539860 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 97965 usecs
Nov 29 09:24:06 np0005539860 kernel: PCI: CLS 0 bytes, default 64
Nov 29 09:24:06 np0005539860 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 29 09:24:06 np0005539860 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Nov 29 09:24:06 np0005539860 kernel: ACPI: bus type thunderbolt registered
Nov 29 09:24:06 np0005539860 kernel: Trying to unpack rootfs image as initramfs...
Nov 29 09:24:06 np0005539860 kernel: Initialise system trusted keyrings
Nov 29 09:24:06 np0005539860 kernel: Key type blacklist registered
Nov 29 09:24:06 np0005539860 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Nov 29 09:24:06 np0005539860 kernel: zbud: loaded
Nov 29 09:24:06 np0005539860 kernel: integrity: Platform Keyring initialized
Nov 29 09:24:06 np0005539860 kernel: integrity: Machine keyring initialized
Nov 29 09:24:06 np0005539860 kernel: Freeing initrd memory: 85868K
Nov 29 09:24:06 np0005539860 kernel: NET: Registered PF_ALG protocol family
Nov 29 09:24:06 np0005539860 kernel: xor: automatically using best checksumming function   avx       
Nov 29 09:24:06 np0005539860 kernel: Key type asymmetric registered
Nov 29 09:24:06 np0005539860 kernel: Asymmetric key parser 'x509' registered
Nov 29 09:24:06 np0005539860 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Nov 29 09:24:06 np0005539860 kernel: io scheduler mq-deadline registered
Nov 29 09:24:06 np0005539860 kernel: io scheduler kyber registered
Nov 29 09:24:06 np0005539860 kernel: io scheduler bfq registered
Nov 29 09:24:06 np0005539860 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Nov 29 09:24:06 np0005539860 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Nov 29 09:24:06 np0005539860 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Nov 29 09:24:06 np0005539860 kernel: ACPI: button: Power Button [PWRF]
Nov 29 09:24:06 np0005539860 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Nov 29 09:24:06 np0005539860 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Nov 29 09:24:06 np0005539860 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Nov 29 09:24:06 np0005539860 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 29 09:24:06 np0005539860 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 29 09:24:06 np0005539860 kernel: Non-volatile memory driver v1.3
Nov 29 09:24:06 np0005539860 kernel: rdac: device handler registered
Nov 29 09:24:06 np0005539860 kernel: hp_sw: device handler registered
Nov 29 09:24:06 np0005539860 kernel: emc: device handler registered
Nov 29 09:24:06 np0005539860 kernel: alua: device handler registered
Nov 29 09:24:06 np0005539860 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Nov 29 09:24:06 np0005539860 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Nov 29 09:24:06 np0005539860 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Nov 29 09:24:06 np0005539860 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Nov 29 09:24:06 np0005539860 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Nov 29 09:24:06 np0005539860 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Nov 29 09:24:06 np0005539860 kernel: usb usb1: Product: UHCI Host Controller
Nov 29 09:24:06 np0005539860 kernel: usb usb1: Manufacturer: Linux 5.14.0-642.el9.x86_64 uhci_hcd
Nov 29 09:24:06 np0005539860 kernel: usb usb1: SerialNumber: 0000:00:01.2
Nov 29 09:24:06 np0005539860 kernel: hub 1-0:1.0: USB hub found
Nov 29 09:24:06 np0005539860 kernel: hub 1-0:1.0: 2 ports detected
Nov 29 09:24:06 np0005539860 kernel: usbcore: registered new interface driver usbserial_generic
Nov 29 09:24:06 np0005539860 kernel: usbserial: USB Serial support registered for generic
Nov 29 09:24:06 np0005539860 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Nov 29 09:24:06 np0005539860 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Nov 29 09:24:06 np0005539860 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Nov 29 09:24:06 np0005539860 kernel: mousedev: PS/2 mouse device common for all mice
Nov 29 09:24:06 np0005539860 kernel: rtc_cmos 00:04: RTC can wake from S4
Nov 29 09:24:06 np0005539860 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Nov 29 09:24:06 np0005539860 kernel: rtc_cmos 00:04: registered as rtc0
Nov 29 09:24:06 np0005539860 kernel: rtc_cmos 00:04: setting system clock to 2025-11-29T14:24:05 UTC (1764426245)
Nov 29 09:24:06 np0005539860 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
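[annotation] A detail worth noting when correlating entries: the syslog timestamps in this file are local time (09:24, UTC-5), while the rtc_cmos line above reports UTC; its epoch value decodes as expected:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1764426245, tz=timezone.utc).isoformat())
    # -> 2025-11-29T14:24:05+00:00, matching the rtc_cmos line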
Nov 29 09:24:06 np0005539860 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Nov 29 09:24:06 np0005539860 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 29 09:24:06 np0005539860 kernel: usbcore: registered new interface driver usbhid
Nov 29 09:24:06 np0005539860 kernel: usbhid: USB HID core driver
Nov 29 09:24:06 np0005539860 kernel: drop_monitor: Initializing network drop monitor service
Nov 29 09:24:06 np0005539860 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Nov 29 09:24:06 np0005539860 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Nov 29 09:24:06 np0005539860 kernel: Initializing XFRM netlink socket
Nov 29 09:24:06 np0005539860 kernel: NET: Registered PF_INET6 protocol family
Nov 29 09:24:06 np0005539860 kernel: Segment Routing with IPv6
Nov 29 09:24:06 np0005539860 kernel: NET: Registered PF_PACKET protocol family
Nov 29 09:24:06 np0005539860 kernel: mpls_gso: MPLS GSO support
Nov 29 09:24:06 np0005539860 kernel: IPI shorthand broadcast: enabled
Nov 29 09:24:06 np0005539860 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 29 09:24:06 np0005539860 kernel: AES CTR mode by8 optimization enabled
Nov 29 09:24:06 np0005539860 kernel: sched_clock: Marking stable (1266008718, 146509435)->(1536949608, -124431455)
Nov 29 09:24:06 np0005539860 kernel: registered taskstats version 1
Nov 29 09:24:06 np0005539860 kernel: Loading compiled-in X.509 certificates
Nov 29 09:24:06 np0005539860 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 09:24:06 np0005539860 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Nov 29 09:24:06 np0005539860 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Nov 29 09:24:06 np0005539860 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Nov 29 09:24:06 np0005539860 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Nov 29 09:24:06 np0005539860 kernel: Demotion targets for Node 0: null
Nov 29 09:24:06 np0005539860 kernel: page_owner is disabled
Nov 29 09:24:06 np0005539860 kernel: Key type .fscrypt registered
Nov 29 09:24:06 np0005539860 kernel: Key type fscrypt-provisioning registered
Nov 29 09:24:06 np0005539860 kernel: Key type big_key registered
Nov 29 09:24:06 np0005539860 kernel: Key type encrypted registered
Nov 29 09:24:06 np0005539860 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 29 09:24:06 np0005539860 kernel: Loading compiled-in module X.509 certificates
Nov 29 09:24:06 np0005539860 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 8ec4bd273f582f9a9b9a494ae677ca1f1488f19e'
Nov 29 09:24:06 np0005539860 kernel: ima: Allocated hash algorithm: sha256
Nov 29 09:24:06 np0005539860 kernel: ima: No architecture policies found
Nov 29 09:24:06 np0005539860 kernel: evm: Initialising EVM extended attributes:
Nov 29 09:24:06 np0005539860 kernel: evm: security.selinux
Nov 29 09:24:06 np0005539860 kernel: evm: security.SMACK64 (disabled)
Nov 29 09:24:06 np0005539860 kernel: evm: security.SMACK64EXEC (disabled)
Nov 29 09:24:06 np0005539860 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Nov 29 09:24:06 np0005539860 kernel: evm: security.SMACK64MMAP (disabled)
Nov 29 09:24:06 np0005539860 kernel: evm: security.apparmor (disabled)
Nov 29 09:24:06 np0005539860 kernel: evm: security.ima
Nov 29 09:24:06 np0005539860 kernel: evm: security.capability
Nov 29 09:24:06 np0005539860 kernel: evm: HMAC attrs: 0x1
Nov 29 09:24:06 np0005539860 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Nov 29 09:24:06 np0005539860 kernel: Running certificate verification RSA selftest
Nov 29 09:24:06 np0005539860 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Nov 29 09:24:06 np0005539860 kernel: Running certificate verification ECDSA selftest
Nov 29 09:24:06 np0005539860 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Nov 29 09:24:06 np0005539860 kernel: clk: Disabling unused clocks
Nov 29 09:24:06 np0005539860 kernel: Freeing unused decrypted memory: 2028K
Nov 29 09:24:06 np0005539860 kernel: Freeing unused kernel image (initmem) memory: 4192K
Nov 29 09:24:06 np0005539860 kernel: Write protecting the kernel read-only data: 30720k
Nov 29 09:24:06 np0005539860 kernel: Freeing unused kernel image (rodata/data gap) memory: 436K
Nov 29 09:24:06 np0005539860 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Nov 29 09:24:06 np0005539860 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Nov 29 09:24:06 np0005539860 kernel: usb 1-1: Product: QEMU USB Tablet
Nov 29 09:24:06 np0005539860 kernel: usb 1-1: Manufacturer: QEMU
Nov 29 09:24:06 np0005539860 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Nov 29 09:24:06 np0005539860 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Nov 29 09:24:06 np0005539860 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Nov 29 09:24:06 np0005539860 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Nov 29 09:24:06 np0005539860 kernel: Run /init as init process
Nov 29 09:24:06 np0005539860 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 09:24:06 np0005539860 systemd: Detected virtualization kvm.
Nov 29 09:24:06 np0005539860 systemd: Detected architecture x86-64.
Nov 29 09:24:06 np0005539860 systemd: Running in initrd.
Nov 29 09:24:06 np0005539860 systemd: No hostname configured, using default hostname.
Nov 29 09:24:06 np0005539860 systemd: Hostname set to <localhost>.
Nov 29 09:24:06 np0005539860 systemd: Initializing machine ID from VM UUID.
Nov 29 09:24:06 np0005539860 systemd: Queued start job for default target Initrd Default Target.
Nov 29 09:24:06 np0005539860 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 09:24:06 np0005539860 systemd: Reached target Local Encrypted Volumes.
Nov 29 09:24:06 np0005539860 systemd: Reached target Initrd /usr File System.
Nov 29 09:24:06 np0005539860 systemd: Reached target Local File Systems.
Nov 29 09:24:06 np0005539860 systemd: Reached target Path Units.
Nov 29 09:24:06 np0005539860 systemd: Reached target Slice Units.
Nov 29 09:24:06 np0005539860 systemd: Reached target Swaps.
Nov 29 09:24:06 np0005539860 systemd: Reached target Timer Units.
Nov 29 09:24:06 np0005539860 systemd: Listening on D-Bus System Message Bus Socket.
Nov 29 09:24:06 np0005539860 systemd: Listening on Journal Socket (/dev/log).
Nov 29 09:24:06 np0005539860 systemd: Listening on Journal Socket.
Nov 29 09:24:06 np0005539860 systemd: Listening on udev Control Socket.
Nov 29 09:24:06 np0005539860 systemd: Listening on udev Kernel Socket.
Nov 29 09:24:06 np0005539860 systemd: Reached target Socket Units.
Nov 29 09:24:06 np0005539860 systemd: Starting Create List of Static Device Nodes...
Nov 29 09:24:06 np0005539860 systemd: Starting Journal Service...
Nov 29 09:24:06 np0005539860 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 09:24:06 np0005539860 systemd: Starting Apply Kernel Variables...
Nov 29 09:24:06 np0005539860 systemd: Starting Create System Users...
Nov 29 09:24:06 np0005539860 systemd: Starting Setup Virtual Console...
Nov 29 09:24:06 np0005539860 systemd: Finished Create List of Static Device Nodes.
Nov 29 09:24:06 np0005539860 systemd: Finished Apply Kernel Variables.
Nov 29 09:24:06 np0005539860 systemd: Finished Create System Users.
Nov 29 09:24:06 np0005539860 systemd-journald[307]: Journal started
Nov 29 09:24:06 np0005539860 systemd-journald[307]: Runtime Journal (/run/log/journal/0615934fa8e34c06805342a9c2c49d13) is 8.0M, max 153.6M, 145.6M free.
Nov 29 09:24:06 np0005539860 systemd-sysusers[311]: Creating group 'users' with GID 100.
Nov 29 09:24:06 np0005539860 systemd-sysusers[311]: Creating group 'dbus' with GID 81.
Nov 29 09:24:06 np0005539860 systemd-sysusers[311]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Nov 29 09:24:06 np0005539860 systemd: Started Journal Service.
Nov 29 09:24:06 np0005539860 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 09:24:06 np0005539860 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 09:24:06 np0005539860 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 09:24:06 np0005539860 systemd[1]: Finished Setup Virtual Console.
Nov 29 09:24:06 np0005539860 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 09:24:06 np0005539860 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Nov 29 09:24:06 np0005539860 systemd[1]: Starting dracut cmdline hook...
Nov 29 09:24:06 np0005539860 dracut-cmdline[325]: dracut-9 dracut-057-102.git20250818.el9
Nov 29 09:24:06 np0005539860 dracut-cmdline[325]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-642.el9.x86_64 root=UUID=b277050f-8ace-464d-abb6-4c46d4c45253 ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Nov 29 09:24:06 np0005539860 systemd[1]: Finished dracut cmdline hook.
Nov 29 09:24:06 np0005539860 systemd[1]: Starting dracut pre-udev hook...
Nov 29 09:24:06 np0005539860 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 29 09:24:06 np0005539860 kernel: device-mapper: uevent: version 1.0.3
Nov 29 09:24:06 np0005539860 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Nov 29 09:24:06 np0005539860 kernel: RPC: Registered named UNIX socket transport module.
Nov 29 09:24:06 np0005539860 kernel: RPC: Registered udp transport module.
Nov 29 09:24:06 np0005539860 kernel: RPC: Registered tcp transport module.
Nov 29 09:24:06 np0005539860 kernel: RPC: Registered tcp-with-tls transport module.
Nov 29 09:24:06 np0005539860 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Nov 29 09:24:06 np0005539860 rpc.statd[441]: Version 2.5.4 starting
Nov 29 09:24:06 np0005539860 rpc.statd[441]: Initializing NSM state
Nov 29 09:24:07 np0005539860 rpc.idmapd[446]: Setting log level to 0
Nov 29 09:24:07 np0005539860 systemd[1]: Finished dracut pre-udev hook.
Nov 29 09:24:07 np0005539860 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 09:24:07 np0005539860 systemd-udevd[459]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 09:24:07 np0005539860 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 09:24:07 np0005539860 systemd[1]: Starting dracut pre-trigger hook...
Nov 29 09:24:07 np0005539860 systemd[1]: Finished dracut pre-trigger hook.
Nov 29 09:24:07 np0005539860 systemd[1]: Starting Coldplug All udev Devices...
Nov 29 09:24:07 np0005539860 systemd[1]: Created slice Slice /system/modprobe.
Nov 29 09:24:07 np0005539860 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 09:24:07 np0005539860 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 09:24:07 np0005539860 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 09:24:07 np0005539860 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 09:24:07 np0005539860 systemd[1]: Mounting Kernel Configuration File System...
Nov 29 09:24:07 np0005539860 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 09:24:07 np0005539860 systemd[1]: Reached target Network.
Nov 29 09:24:07 np0005539860 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Nov 29 09:24:07 np0005539860 systemd[1]: Starting dracut initqueue hook...
Nov 29 09:24:07 np0005539860 systemd[1]: Mounted Kernel Configuration File System.
Nov 29 09:24:07 np0005539860 systemd[1]: Reached target System Initialization.
Nov 29 09:24:07 np0005539860 systemd[1]: Reached target Basic System.
Nov 29 09:24:07 np0005539860 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Nov 29 09:24:07 np0005539860 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Nov 29 09:24:07 np0005539860 kernel: vda: vda1
Nov 29 09:24:07 np0005539860 kernel: scsi host0: ata_piix
Nov 29 09:24:07 np0005539860 kernel: scsi host1: ata_piix
Nov 29 09:24:07 np0005539860 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Nov 29 09:24:07 np0005539860 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Nov 29 09:24:07 np0005539860 systemd[1]: Found device /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 09:24:07 np0005539860 systemd[1]: Reached target Initrd Root Device.
Nov 29 09:24:07 np0005539860 kernel: ata1: found unknown device (class 0)
Nov 29 09:24:07 np0005539860 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Nov 29 09:24:07 np0005539860 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Nov 29 09:24:07 np0005539860 systemd-udevd[464]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:24:07 np0005539860 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Nov 29 09:24:07 np0005539860 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Nov 29 09:24:07 np0005539860 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 29 09:24:07 np0005539860 systemd[1]: Finished dracut initqueue hook.
Nov 29 09:24:07 np0005539860 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 09:24:07 np0005539860 systemd[1]: Reached target Remote Encrypted Volumes.
Nov 29 09:24:07 np0005539860 systemd[1]: Reached target Remote File Systems.
Nov 29 09:24:07 np0005539860 systemd[1]: Starting dracut pre-mount hook...
Nov 29 09:24:07 np0005539860 systemd[1]: Finished dracut pre-mount hook.
Nov 29 09:24:07 np0005539860 systemd[1]: Starting File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253...
Nov 29 09:24:07 np0005539860 systemd-fsck[555]: /usr/sbin/fsck.xfs: XFS file system.
Nov 29 09:24:07 np0005539860 systemd[1]: Finished File System Check on /dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253.
Nov 29 09:24:07 np0005539860 systemd[1]: Mounting /sysroot...
Nov 29 09:24:08 np0005539860 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Nov 29 09:24:08 np0005539860 kernel: XFS (vda1): Mounting V5 Filesystem b277050f-8ace-464d-abb6-4c46d4c45253
Nov 29 09:24:08 np0005539860 kernel: XFS (vda1): Ending clean mount
Nov 29 09:24:08 np0005539860 systemd[1]: Mounted /sysroot.
Nov 29 09:24:08 np0005539860 systemd[1]: Reached target Initrd Root File System.
Nov 29 09:24:08 np0005539860 systemd[1]: Starting Mountpoints Configured in the Real Root...
Nov 29 09:24:08 np0005539860 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Finished Mountpoints Configured in the Real Root.
Nov 29 09:24:08 np0005539860 systemd[1]: Reached target Initrd File Systems.
Nov 29 09:24:08 np0005539860 systemd[1]: Reached target Initrd Default Target.
Nov 29 09:24:08 np0005539860 systemd[1]: Starting dracut mount hook...
Nov 29 09:24:08 np0005539860 systemd[1]: Finished dracut mount hook.
Nov 29 09:24:08 np0005539860 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Nov 29 09:24:08 np0005539860 rpc.idmapd[446]: exiting on signal 15
Nov 29 09:24:08 np0005539860 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Nov 29 09:24:08 np0005539860 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Network.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Remote Encrypted Volumes.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Timer Units.
Nov 29 09:24:08 np0005539860 systemd[1]: dbus.socket: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Closed D-Bus System Message Bus Socket.
Nov 29 09:24:08 np0005539860 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Initrd Default Target.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Basic System.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Initrd Root Device.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Initrd /usr File System.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Path Units.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Remote File Systems.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Preparation for Remote File Systems.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Slice Units.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Socket Units.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target System Initialization.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Local File Systems.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Swaps.
Nov 29 09:24:08 np0005539860 systemd[1]: dracut-mount.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped dracut mount hook.
Nov 29 09:24:08 np0005539860 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped dracut pre-mount hook.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped target Local Encrypted Volumes.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Nov 29 09:24:08 np0005539860 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped dracut initqueue hook.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Create Volatile Files and Directories.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Coldplug All udev Devices.
Nov 29 09:24:08 np0005539860 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped dracut pre-trigger hook.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Setup Virtual Console.
Nov 29 09:24:08 np0005539860 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-udevd.service: Consumed 1.101s CPU time.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Closed udev Control Socket.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Closed udev Kernel Socket.
Nov 29 09:24:08 np0005539860 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped dracut pre-udev hook.
Nov 29 09:24:08 np0005539860 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped dracut cmdline hook.
Nov 29 09:24:08 np0005539860 systemd[1]: Starting Cleanup udev Database...
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Create Static Device Nodes in /dev.
Nov 29 09:24:08 np0005539860 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Create List of Static Device Nodes.
Nov 29 09:24:08 np0005539860 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Stopped Create System Users.
Nov 29 09:24:08 np0005539860 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 29 09:24:08 np0005539860 systemd[1]: Finished Cleanup udev Database.
Nov 29 09:24:08 np0005539860 systemd[1]: Reached target Switch Root.
Nov 29 09:24:08 np0005539860 systemd[1]: Starting Switch Root...
Nov 29 09:24:08 np0005539860 systemd[1]: Switching root.
Nov 29 09:24:08 np0005539860 systemd-journald[307]: Journal stopped
Nov 29 09:24:09 np0005539860 systemd-journald: Received SIGTERM from PID 1 (systemd).
Nov 29 09:24:09 np0005539860 kernel: audit: type=1404 audit(1764426248.796:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Nov 29 09:24:09 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 09:24:09 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 09:24:09 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 09:24:09 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 09:24:09 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 09:24:09 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 09:24:09 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 09:24:09 np0005539860 kernel: audit: type=1403 audit(1764426248.968:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
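
Editor's note: the two audit records above carry their own audit(EPOCH.MS:SERIAL) timestamps in UTC epoch seconds, while the syslog prefix on the same lines is local time (UTC-5 here, as the cloud-init lines later in this log confirm). A minimal sketch for converting one, using only the stock re/datetime modules; the helper name is illustrative, not part of any audit tooling:

    import re
    from datetime import datetime, timezone

    def audit_epoch(line):
        # Pull the epoch stamp out of an "audit(EPOCH.MS:SERIAL)" token.
        m = re.search(r"audit\((\d+)\.(\d+):(\d+)\)", line)
        if m is None:
            return None
        ts = int(m.group(1)) + int(m.group(2)) / 1000.0
        return datetime.fromtimestamp(ts, tz=timezone.utc)

    # audit(1764426248.796:2) -> 2025-11-29 14:24:08.796 UTC, i.e. the
    # 09:24:08 local (UTC-5) prefix on the same line.
    print(audit_epoch("audit(1764426248.796:2): enforcing=1"))
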
Nov 29 09:24:09 np0005539860 systemd: Successfully loaded SELinux policy in 177.878ms.
Nov 29 09:24:09 np0005539860 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.644ms.
Nov 29 09:24:09 np0005539860 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Nov 29 09:24:09 np0005539860 systemd: Detected virtualization kvm.
Nov 29 09:24:09 np0005539860 systemd: Detected architecture x86-64.
Nov 29 09:24:09 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:24:09 np0005539860 systemd: initrd-switch-root.service: Deactivated successfully.
Nov 29 09:24:09 np0005539860 systemd: Stopped Switch Root.
Nov 29 09:24:09 np0005539860 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 29 09:24:09 np0005539860 systemd: Created slice Slice /system/getty.
Nov 29 09:24:09 np0005539860 systemd: Created slice Slice /system/serial-getty.
Nov 29 09:24:09 np0005539860 systemd: Created slice Slice /system/sshd-keygen.
Nov 29 09:24:09 np0005539860 systemd: Created slice User and Session Slice.
Nov 29 09:24:09 np0005539860 systemd: Started Dispatch Password Requests to Console Directory Watch.
Nov 29 09:24:09 np0005539860 systemd: Started Forward Password Requests to Wall Directory Watch.
Nov 29 09:24:09 np0005539860 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Nov 29 09:24:09 np0005539860 systemd: Reached target Local Encrypted Volumes.
Nov 29 09:24:09 np0005539860 systemd: Stopped target Switch Root.
Nov 29 09:24:09 np0005539860 systemd: Stopped target Initrd File Systems.
Nov 29 09:24:09 np0005539860 systemd: Stopped target Initrd Root File System.
Nov 29 09:24:09 np0005539860 systemd: Reached target Local Integrity Protected Volumes.
Nov 29 09:24:09 np0005539860 systemd: Reached target Path Units.
Nov 29 09:24:09 np0005539860 systemd: Reached target rpc_pipefs.target.
Nov 29 09:24:09 np0005539860 systemd: Reached target Slice Units.
Nov 29 09:24:09 np0005539860 systemd: Reached target Swaps.
Nov 29 09:24:09 np0005539860 systemd: Reached target Local Verity Protected Volumes.
Nov 29 09:24:09 np0005539860 systemd: Listening on RPCbind Server Activation Socket.
Nov 29 09:24:09 np0005539860 systemd: Reached target RPC Port Mapper.
Nov 29 09:24:09 np0005539860 systemd: Listening on Process Core Dump Socket.
Nov 29 09:24:09 np0005539860 systemd: Listening on initctl Compatibility Named Pipe.
Nov 29 09:24:09 np0005539860 systemd: Listening on udev Control Socket.
Nov 29 09:24:09 np0005539860 systemd: Listening on udev Kernel Socket.
Nov 29 09:24:09 np0005539860 systemd: Mounting Huge Pages File System...
Nov 29 09:24:09 np0005539860 systemd: Mounting POSIX Message Queue File System...
Nov 29 09:24:09 np0005539860 systemd: Mounting Kernel Debug File System...
Nov 29 09:24:09 np0005539860 systemd: Mounting Kernel Trace File System...
Nov 29 09:24:09 np0005539860 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 09:24:09 np0005539860 systemd: Starting Create List of Static Device Nodes...
Nov 29 09:24:09 np0005539860 systemd: Starting Load Kernel Module configfs...
Nov 29 09:24:09 np0005539860 systemd: Starting Load Kernel Module drm...
Nov 29 09:24:09 np0005539860 systemd: Starting Load Kernel Module efi_pstore...
Nov 29 09:24:09 np0005539860 systemd: Starting Load Kernel Module fuse...
Nov 29 09:24:09 np0005539860 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Nov 29 09:24:09 np0005539860 systemd: systemd-fsck-root.service: Deactivated successfully.
Nov 29 09:24:09 np0005539860 systemd: Stopped File System Check on Root Device.
Nov 29 09:24:09 np0005539860 systemd: Stopped Journal Service.
Nov 29 09:24:09 np0005539860 systemd: Starting Journal Service...
Nov 29 09:24:09 np0005539860 kernel: ACPI: bus type drm_connector registered
Nov 29 09:24:09 np0005539860 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Nov 29 09:24:09 np0005539860 systemd: Starting Generate network units from Kernel command line...
Nov 29 09:24:09 np0005539860 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 09:24:09 np0005539860 systemd: Starting Remount Root and Kernel File Systems...
Nov 29 09:24:09 np0005539860 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 29 09:24:09 np0005539860 systemd: Starting Apply Kernel Variables...
Nov 29 09:24:09 np0005539860 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Nov 29 09:24:09 np0005539860 kernel: fuse: init (API version 7.37)
Nov 29 09:24:09 np0005539860 systemd: Starting Coldplug All udev Devices...
Nov 29 09:24:09 np0005539860 systemd: Mounted Huge Pages File System.
Nov 29 09:24:09 np0005539860 systemd: Mounted POSIX Message Queue File System.
Nov 29 09:24:09 np0005539860 systemd: Mounted Kernel Debug File System.
Nov 29 09:24:09 np0005539860 systemd: Mounted Kernel Trace File System.
Nov 29 09:24:09 np0005539860 systemd-journald[678]: Journal started
Nov 29 09:24:09 np0005539860 systemd-journald[678]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
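
Editor's note: journald's size report above lists current usage, the configured cap, and the remaining headroom; a one-line consistency check, assuming all three figures are MiB (each carries an M suffix):

    # Figures copied from the journald line above; assuming MiB throughout.
    used, cap = 8.0, 153.6
    print(f"{cap - used:.1f}M free")  # 145.6M, matching the report
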
Nov 29 09:24:09 np0005539860 systemd[1]: Queued start job for default target Multi-User System.
Nov 29 09:24:09 np0005539860 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 29 09:24:09 np0005539860 systemd: Started Journal Service.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Create List of Static Device Nodes.
Nov 29 09:24:09 np0005539860 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 09:24:09 np0005539860 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Load Kernel Module drm.
Nov 29 09:24:09 np0005539860 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Load Kernel Module efi_pstore.
Nov 29 09:24:09 np0005539860 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Load Kernel Module fuse.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Generate network units from Kernel command line.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Remount Root and Kernel File Systems.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Apply Kernel Variables.
Nov 29 09:24:09 np0005539860 systemd[1]: Mounting FUSE Control File System...
Nov 29 09:24:09 np0005539860 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Rebuild Hardware Database...
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Flush Journal to Persistent Storage...
Nov 29 09:24:09 np0005539860 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Load/Save OS Random Seed...
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Create System Users...
Nov 29 09:24:09 np0005539860 systemd[1]: Mounted FUSE Control File System.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Load/Save OS Random Seed.
Nov 29 09:24:09 np0005539860 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Nov 29 09:24:09 np0005539860 systemd-journald[678]: Runtime Journal (/run/log/journal/1f988c78c563e12389ab342aced42dbb) is 8.0M, max 153.6M, 145.6M free.
Nov 29 09:24:09 np0005539860 systemd-journald[678]: Received client request to flush runtime journal.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Flush Journal to Persistent Storage.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Coldplug All udev Devices.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Create System Users.
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Create Static Device Nodes in /dev...
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Create Static Device Nodes in /dev.
Nov 29 09:24:09 np0005539860 systemd[1]: Reached target Preparation for Local File Systems.
Nov 29 09:24:09 np0005539860 systemd[1]: Reached target Local File Systems.
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Nov 29 09:24:09 np0005539860 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Nov 29 09:24:09 np0005539860 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 29 09:24:09 np0005539860 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Automatic Boot Loader Update...
Nov 29 09:24:09 np0005539860 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Create Volatile Files and Directories...
Nov 29 09:24:09 np0005539860 bootctl[696]: Couldn't find EFI system partition, skipping.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Automatic Boot Loader Update.
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Create Volatile Files and Directories.
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Security Auditing Service...
Nov 29 09:24:09 np0005539860 systemd[1]: Starting RPC Bind...
Nov 29 09:24:09 np0005539860 systemd[1]: Starting Rebuild Journal Catalog...
Nov 29 09:24:09 np0005539860 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Nov 29 09:24:10 np0005539860 auditd[702]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Nov 29 09:24:10 np0005539860 auditd[702]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Nov 29 09:24:10 np0005539860 systemd[1]: Started RPC Bind.
Nov 29 09:24:10 np0005539860 systemd[1]: Finished Rebuild Journal Catalog.
Nov 29 09:24:10 np0005539860 augenrules[707]: /sbin/augenrules: No change
Nov 29 09:24:10 np0005539860 augenrules[722]: No rules
Nov 29 09:24:10 np0005539860 augenrules[722]: enabled 1
Nov 29 09:24:10 np0005539860 augenrules[722]: failure 1
Nov 29 09:24:10 np0005539860 augenrules[722]: pid 702
Nov 29 09:24:10 np0005539860 augenrules[722]: rate_limit 0
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_limit 8192
Nov 29 09:24:10 np0005539860 augenrules[722]: lost 0
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog 3
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_wait_time 60000
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_wait_time_actual 0
Nov 29 09:24:10 np0005539860 augenrules[722]: enabled 1
Nov 29 09:24:10 np0005539860 augenrules[722]: failure 1
Nov 29 09:24:10 np0005539860 augenrules[722]: pid 702
Nov 29 09:24:10 np0005539860 augenrules[722]: rate_limit 0
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_limit 8192
Nov 29 09:24:10 np0005539860 augenrules[722]: lost 0
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog 4
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_wait_time 60000
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_wait_time_actual 0
Nov 29 09:24:10 np0005539860 augenrules[722]: enabled 1
Nov 29 09:24:10 np0005539860 augenrules[722]: failure 1
Nov 29 09:24:10 np0005539860 augenrules[722]: pid 702
Nov 29 09:24:10 np0005539860 augenrules[722]: rate_limit 0
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_limit 8192
Nov 29 09:24:10 np0005539860 augenrules[722]: lost 0
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog 4
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_wait_time 60000
Nov 29 09:24:10 np0005539860 augenrules[722]: backlog_wait_time_actual 0
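
Editor's note: augenrules prints the auditd status block once per rule-set load, which is why the same key/value run appears three times above (only the backlog count drifts). A hedged sketch for folding those lines into a dict for quick comparison; the function name is illustrative:

    def parse_auditd_status(lines):
        # Fold "key value" status lines into a dict; later repetitions
        # (one block per rule load) overwrite earlier ones.
        status = {}
        for line in lines:
            key, _, value = line.strip().partition(" ")
            if value.isdigit():
                status[key] = int(value)
        return status

    print(parse_auditd_status(["enabled 1", "backlog_limit 8192", "backlog 4"]))
    # -> {'enabled': 1, 'backlog_limit': 8192, 'backlog': 4}
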
Nov 29 09:24:10 np0005539860 systemd[1]: Started Security Auditing Service.
Nov 29 09:24:10 np0005539860 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Nov 29 09:24:10 np0005539860 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Nov 29 09:24:10 np0005539860 systemd[1]: Finished Rebuild Hardware Database.
Nov 29 09:24:10 np0005539860 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Nov 29 09:24:10 np0005539860 systemd[1]: Starting Update is Completed...
Nov 29 09:24:10 np0005539860 systemd[1]: Finished Update is Completed.
Nov 29 09:24:10 np0005539860 systemd-udevd[730]: Using default interface naming scheme 'rhel-9.0'.
Nov 29 09:24:10 np0005539860 systemd[1]: Started Rule-based Manager for Device Events and Files.
Nov 29 09:24:10 np0005539860 systemd[1]: Reached target System Initialization.
Nov 29 09:24:10 np0005539860 systemd[1]: Started dnf makecache --timer.
Nov 29 09:24:10 np0005539860 systemd[1]: Started Daily rotation of log files.
Nov 29 09:24:10 np0005539860 systemd[1]: Started Daily Cleanup of Temporary Directories.
Nov 29 09:24:10 np0005539860 systemd[1]: Reached target Timer Units.
Nov 29 09:24:10 np0005539860 systemd[1]: Listening on D-Bus System Message Bus Socket.
Nov 29 09:24:10 np0005539860 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Nov 29 09:24:10 np0005539860 systemd[1]: Reached target Socket Units.
Nov 29 09:24:10 np0005539860 systemd[1]: Starting D-Bus System Message Bus...
Nov 29 09:24:10 np0005539860 systemd-udevd[733]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:24:10 np0005539860 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 09:24:10 np0005539860 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Nov 29 09:24:10 np0005539860 systemd[1]: Starting Load Kernel Module configfs...
Nov 29 09:24:10 np0005539860 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 29 09:24:10 np0005539860 systemd[1]: Finished Load Kernel Module configfs.
Nov 29 09:24:10 np0005539860 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Nov 29 09:24:10 np0005539860 systemd[1]: Started D-Bus System Message Bus.
Nov 29 09:24:10 np0005539860 systemd[1]: Reached target Basic System.
Nov 29 09:24:10 np0005539860 dbus-broker-lau[770]: Ready
Nov 29 09:24:10 np0005539860 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Nov 29 09:24:10 np0005539860 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Nov 29 09:24:10 np0005539860 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Nov 29 09:24:10 np0005539860 systemd[1]: Starting NTP client/server...
Nov 29 09:24:10 np0005539860 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Nov 29 09:24:10 np0005539860 systemd[1]: Starting Restore /run/initramfs on shutdown...
Nov 29 09:24:10 np0005539860 systemd[1]: Starting IPv4 firewall with iptables...
Nov 29 09:24:10 np0005539860 systemd[1]: Started irqbalance daemon.
Nov 29 09:24:10 np0005539860 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Nov 29 09:24:10 np0005539860 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 09:24:10 np0005539860 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 09:24:10 np0005539860 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 09:24:10 np0005539860 systemd[1]: Reached target sshd-keygen.target.
Nov 29 09:24:10 np0005539860 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Nov 29 09:24:10 np0005539860 systemd[1]: Reached target User and Group Name Lookups.
Nov 29 09:24:10 np0005539860 systemd[1]: Starting User Login Management...
Nov 29 09:24:10 np0005539860 systemd[1]: Finished Restore /run/initramfs on shutdown.
Nov 29 09:24:10 np0005539860 chronyd[802]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 09:24:10 np0005539860 chronyd[802]: Loaded 0 symmetric keys
Nov 29 09:24:10 np0005539860 chronyd[802]: Using right/UTC timezone to obtain leap second data
Nov 29 09:24:10 np0005539860 chronyd[802]: Loaded seccomp filter (level 2)
Nov 29 09:24:10 np0005539860 systemd[1]: Started NTP client/server.
Nov 29 09:24:10 np0005539860 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Nov 29 09:24:10 np0005539860 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Nov 29 09:24:10 np0005539860 systemd-logind[794]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 09:24:10 np0005539860 systemd-logind[794]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 09:24:10 np0005539860 systemd-logind[794]: New seat seat0.
Nov 29 09:24:10 np0005539860 systemd[1]: Started User Login Management.
Nov 29 09:24:10 np0005539860 kernel: kvm_amd: TSC scaling supported
Nov 29 09:24:10 np0005539860 kernel: kvm_amd: Nested Virtualization enabled
Nov 29 09:24:10 np0005539860 kernel: kvm_amd: Nested Paging enabled
Nov 29 09:24:10 np0005539860 kernel: kvm_amd: LBR virtualization supported
Nov 29 09:24:10 np0005539860 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Nov 29 09:24:10 np0005539860 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Nov 29 09:24:10 np0005539860 iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Nov 29 09:24:10 np0005539860 systemd[1]: Finished IPv4 firewall with iptables.
Nov 29 09:24:10 np0005539860 kernel: Console: switching to colour dummy device 80x25
Nov 29 09:24:10 np0005539860 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Nov 29 09:24:10 np0005539860 kernel: [drm] features: -context_init
Nov 29 09:24:10 np0005539860 kernel: [drm] number of scanouts: 1
Nov 29 09:24:10 np0005539860 kernel: [drm] number of cap sets: 0
Nov 29 09:24:10 np0005539860 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Nov 29 09:24:10 np0005539860 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Nov 29 09:24:10 np0005539860 kernel: Console: switching to colour frame buffer device 128x48
Nov 29 09:24:10 np0005539860 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Nov 29 09:24:11 np0005539860 cloud-init[840]: Cloud-init v. 24.4-7.el9 running 'init-local' at Sat, 29 Nov 2025 14:24:11 +0000. Up 6.73 seconds.
Nov 29 09:24:11 np0005539860 systemd[1]: run-cloud\x2dinit-tmp-tmp_ire3mcv.mount: Deactivated successfully.
Nov 29 09:24:11 np0005539860 systemd[1]: Starting Hostname Service...
Nov 29 09:24:11 np0005539860 systemd[1]: Started Hostname Service.
Nov 29 09:24:11 np0005539860 systemd-hostnamed[854]: Hostname set to <np0005539860.novalocal> (static)
Nov 29 09:24:11 np0005539860 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Nov 29 09:24:11 np0005539860 systemd[1]: Reached target Preparation for Network.
Nov 29 09:24:11 np0005539860 systemd[1]: Starting Network Manager...
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.5975] NetworkManager (version 1.54.1-1.el9) is starting... (boot:8fdefcd0-656c-425f-85db-4aad72467491)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.5981] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6094] manager[0x563291685080]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6137] hostname: hostname: using hostnamed
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6137] hostname: static hostname changed from (none) to "np0005539860.novalocal"
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6145] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6260] manager[0x563291685080]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6261] manager[0x563291685080]: rfkill: WWAN hardware radio set enabled
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6331] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6332] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6333] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6334] manager: Networking is enabled by state file
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6338] settings: Loaded settings plugin: keyfile (internal)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6353] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6384] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6404] dhcp: init: Using DHCP client 'internal'
Nov 29 09:24:11 np0005539860 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6409] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6432] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6443] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6457] device (lo): Activation: starting connection 'lo' (960ffe02-1dfc-4f61-974b-5b08f23a4149)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6473] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6479] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6524] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6531] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6534] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6536] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6539] device (eth0): carrier: link connected
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6543] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6551] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6557] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6562] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6562] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6565] manager: NetworkManager state is now CONNECTING
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6566] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6572] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6574] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 09:24:11 np0005539860 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 09:24:11 np0005539860 systemd[1]: Started Network Manager.
Nov 29 09:24:11 np0005539860 systemd[1]: Reached target Network.
Nov 29 09:24:11 np0005539860 systemd[1]: Starting Network Manager Wait Online...
Nov 29 09:24:11 np0005539860 systemd[1]: Starting GSSAPI Proxy Daemon...
Nov 29 09:24:11 np0005539860 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6957] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6959] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 09:24:11 np0005539860 NetworkManager[858]: <info>  [1764426251.6970] device (lo): Activation: successful, device activated.
Nov 29 09:24:11 np0005539860 systemd[1]: Started GSSAPI Proxy Daemon.
Nov 29 09:24:11 np0005539860 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Nov 29 09:24:11 np0005539860 systemd[1]: Reached target NFS client services.
Nov 29 09:24:11 np0005539860 systemd[1]: Reached target Preparation for Remote File Systems.
Nov 29 09:24:11 np0005539860 systemd[1]: Reached target Remote File Systems.
Nov 29 09:24:11 np0005539860 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3399] dhcp4 (eth0): state changed new lease, address=38.102.83.64
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3412] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3435] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3497] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3500] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3503] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3507] device (eth0): Activation: successful, device activated.
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3511] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 09:24:12 np0005539860 NetworkManager[858]: <info>  [1764426252.3513] manager: startup complete
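
Editor's note: the bracketed NetworkManager timestamps are plain Unix epoch seconds, so the full bring-up can be timed directly from the log. A small sketch, with the two boundary values copied from the lines above:

    # Boundary stamps copied from the NetworkManager lines above; the
    # bracketed values are plain Unix epoch seconds.
    starting = 1764426251.5975   # "NetworkManager (version 1.54.1-1.el9) is starting..."
    complete = 1764426252.3513   # "manager: startup complete"
    print(f"bring-up took {complete - starting:.4f}s")  # ~0.7538s
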
Nov 29 09:24:12 np0005539860 systemd[1]: Finished Network Manager Wait Online.
Nov 29 09:24:12 np0005539860 systemd[1]: Starting Cloud-init: Network Stage...
Nov 29 09:24:12 np0005539860 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Sat, 29 Nov 2025 14:24:12 +0000. Up 8.36 seconds.
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |  eth0  | True |         38.102.83.64         | 255.255.255.0 | global | fa:16:3e:30:9f:cd |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe30:9fcd/64 |       .       |  link  | fa:16:3e:30:9f:cd |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Nov 29 09:24:12 np0005539860 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
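
Editor's note: cloud-init renders the device and route summaries above as +---+ bordered ASCII tables. A minimal hedged sketch for splitting one data row back into cells; the function name is illustrative, not a cloud-init API:

    def ci_info_row(line):
        # Return the stripped cells of one "ci-info: | ... |" table row,
        # or None for border/banner lines.
        _, _, row = line.partition("ci-info: ")
        if not row.startswith("|"):
            return None
        return [cell.strip() for cell in row.strip("|").split("|")]

    print(ci_info_row("ci-info: |  eth0  | True |  38.102.83.64  |"))
    # -> ['eth0', 'True', '38.102.83.64']
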
Nov 29 09:24:14 np0005539860 cloud-init[921]: Generating public/private rsa key pair.
Nov 29 09:24:14 np0005539860 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Nov 29 09:24:14 np0005539860 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Nov 29 09:24:14 np0005539860 cloud-init[921]: The key fingerprint is:
Nov 29 09:24:14 np0005539860 cloud-init[921]: SHA256:SWP5vQXIkgu3cPEWfvDt3f6G5L42S/LbZHsW+Q/WXK4 root@np0005539860.novalocal
Nov 29 09:24:14 np0005539860 cloud-init[921]: The key's randomart image is:
Nov 29 09:24:14 np0005539860 cloud-init[921]: +---[RSA 3072]----+
Nov 29 09:24:14 np0005539860 cloud-init[921]: |        . o      |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |         B = .   |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |      o X * + .  |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |       B O o o ..|
Nov 29 09:24:14 np0005539860 cloud-init[921]: |        S . . o =|
Nov 29 09:24:14 np0005539860 cloud-init[921]: |             o.B.|
Nov 29 09:24:14 np0005539860 cloud-init[921]: |            oo+.X|
Nov 29 09:24:14 np0005539860 cloud-init[921]: |             ==*B|
Nov 29 09:24:14 np0005539860 cloud-init[921]: |             oEO*|
Nov 29 09:24:14 np0005539860 cloud-init[921]: +----[SHA256]-----+
Nov 29 09:24:14 np0005539860 cloud-init[921]: Generating public/private ecdsa key pair.
Nov 29 09:24:14 np0005539860 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Nov 29 09:24:14 np0005539860 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Nov 29 09:24:14 np0005539860 cloud-init[921]: The key fingerprint is:
Nov 29 09:24:14 np0005539860 cloud-init[921]: SHA256:PDf46BQN46Ua/Dm6b4s4k5ONu0bIJHEMplf+KklEu8o root@np0005539860.novalocal
Nov 29 09:24:14 np0005539860 cloud-init[921]: The key's randomart image is:
Nov 29 09:24:14 np0005539860 cloud-init[921]: +---[ECDSA 256]---+
Nov 29 09:24:14 np0005539860 cloud-init[921]: |.+. .            |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |+.o+             |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |.o+ .   o .      |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |.o.. o o B       |
Nov 29 09:24:14 np0005539860 cloud-init[921]: | +o.  + S +      |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |.oo... + B .     |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |.Eo..=. * .      |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |   .O..=..       |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |   .+*+++.       |
Nov 29 09:24:14 np0005539860 cloud-init[921]: +----[SHA256]-----+
Nov 29 09:24:14 np0005539860 cloud-init[921]: Generating public/private ed25519 key pair.
Nov 29 09:24:14 np0005539860 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Nov 29 09:24:14 np0005539860 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Nov 29 09:24:14 np0005539860 cloud-init[921]: The key fingerprint is:
Nov 29 09:24:14 np0005539860 cloud-init[921]: SHA256:a6q9NxsNZrZEV0A5kbH/JddCAfqNrGsPa4KuBYS3k1I root@np0005539860.novalocal
Nov 29 09:24:14 np0005539860 cloud-init[921]: The key's randomart image is:
Nov 29 09:24:14 np0005539860 cloud-init[921]: +--[ED25519 256]--+
Nov 29 09:24:14 np0005539860 cloud-init[921]: |         .=*o..  |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |   .      ++   . |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |  . E   . +.  .  |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |   + o . . + +  .|
Nov 29 09:24:14 np0005539860 cloud-init[921]: |  . =   S   = + +|
Nov 29 09:24:14 np0005539860 cloud-init[921]: |   . o = = . . = |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |      ..= +   .  |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |     o.o+.o+     |
Nov 29 09:24:14 np0005539860 cloud-init[921]: |    o+=o.*o..    |
Nov 29 09:24:14 np0005539860 cloud-init[921]: +----[SHA256]-----+
Nov 29 09:24:14 np0005539860 systemd[1]: Finished Cloud-init: Network Stage.
Nov 29 09:24:14 np0005539860 systemd[1]: Reached target Cloud-config availability.
Nov 29 09:24:14 np0005539860 systemd[1]: Reached target Network is Online.
Nov 29 09:24:14 np0005539860 systemd[1]: Starting Cloud-init: Config Stage...
Nov 29 09:24:14 np0005539860 systemd[1]: Starting Crash recovery kernel arming...
Nov 29 09:24:14 np0005539860 systemd[1]: Starting Notify NFS peers of a restart...
Nov 29 09:24:14 np0005539860 systemd[1]: Starting System Logging Service...
Nov 29 09:24:14 np0005539860 sm-notify[1005]: Version 2.5.4 starting
Nov 29 09:24:14 np0005539860 systemd[1]: Starting OpenSSH server daemon...
Nov 29 09:24:14 np0005539860 systemd[1]: Starting Permit User Sessions...
Nov 29 09:24:14 np0005539860 systemd[1]: Started Notify NFS peers of a restart.
Nov 29 09:24:14 np0005539860 systemd[1]: Started OpenSSH server daemon.
Nov 29 09:24:14 np0005539860 systemd[1]: Finished Permit User Sessions.
Nov 29 09:24:14 np0005539860 systemd[1]: Started Command Scheduler.
Nov 29 09:24:14 np0005539860 systemd[1]: Started Getty on tty1.
Nov 29 09:24:14 np0005539860 systemd[1]: Started Serial Getty on ttyS0.
Nov 29 09:24:14 np0005539860 systemd[1]: Reached target Login Prompts.
Nov 29 09:24:14 np0005539860 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Nov 29 09:24:14 np0005539860 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Nov 29 09:24:14 np0005539860 systemd[1]: Started System Logging Service.
Nov 29 09:24:14 np0005539860 systemd[1]: Reached target Multi-User System.
Nov 29 09:24:14 np0005539860 systemd[1]: Starting Record Runlevel Change in UTMP...
Nov 29 09:24:14 np0005539860 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Nov 29 09:24:14 np0005539860 systemd[1]: Finished Record Runlevel Change in UTMP.
Nov 29 09:24:14 np0005539860 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 09:24:14 np0005539860 kdumpctl[1015]: kdump: No kdump initial ramdisk found.
Nov 29 09:24:14 np0005539860 kdumpctl[1015]: kdump: Rebuilding /boot/initramfs-5.14.0-642.el9.x86_64kdump.img
Nov 29 09:24:14 np0005539860 cloud-init[1123]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Sat, 29 Nov 2025 14:24:14 +0000. Up 10.37 seconds.
Nov 29 09:24:14 np0005539860 systemd[1]: Finished Cloud-init: Config Stage.
Nov 29 09:24:14 np0005539860 systemd[1]: Starting Cloud-init: Final Stage...
Nov 29 09:24:15 np0005539860 cloud-init[1265]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Sat, 29 Nov 2025 14:24:15 +0000. Up 10.77 seconds.
Nov 29 09:24:15 np0005539860 dracut[1269]: dracut-057-102.git20250818.el9
Nov 29 09:24:15 np0005539860 cloud-init[1284]: #############################################################
Nov 29 09:24:15 np0005539860 cloud-init[1287]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Nov 29 09:24:15 np0005539860 cloud-init[1289]: 256 SHA256:PDf46BQN46Ua/Dm6b4s4k5ONu0bIJHEMplf+KklEu8o root@np0005539860.novalocal (ECDSA)
Nov 29 09:24:15 np0005539860 cloud-init[1291]: 256 SHA256:a6q9NxsNZrZEV0A5kbH/JddCAfqNrGsPa4KuBYS3k1I root@np0005539860.novalocal (ED25519)
Nov 29 09:24:15 np0005539860 cloud-init[1293]: 3072 SHA256:SWP5vQXIkgu3cPEWfvDt3f6G5L42S/LbZHsW+Q/WXK4 root@np0005539860.novalocal (RSA)
Nov 29 09:24:15 np0005539860 cloud-init[1295]: -----END SSH HOST KEY FINGERPRINTS-----
Nov 29 09:24:15 np0005539860 cloud-init[1296]: #############################################################
Nov 29 09:24:15 np0005539860 cloud-init[1265]: Cloud-init v. 24.4-7.el9 finished at Sat, 29 Nov 2025 14:24:15 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.95 seconds
Nov 29 09:24:15 np0005539860 systemd[1]: Finished Cloud-init: Final Stage.
Nov 29 09:24:15 np0005539860 systemd[1]: Reached target Cloud-init target.
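
Editor's note: the SHA256 fingerprints cloud-init printed above are base64(SHA-256(raw key blob)) with the padding stripped, the same form ssh-keygen -lf reports. A hedged sketch recomputing one from a public-key line; the file path in the usage comment is the standard host-key location, shown for illustration:

    import base64, hashlib

    def ssh_fingerprint(pubkey_line):
        # SHA256 fingerprint of an OpenSSH public key line
        # ("ssh-ed25519 AAAA... comment"): base64(sha256(key blob)),
        # padding stripped, as ssh-keygen -lf prints it.
        blob = base64.b64decode(pubkey_line.split()[1])
        digest = hashlib.sha256(blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    # ssh_fingerprint(open("/etc/ssh/ssh_host_ed25519_key.pub").read())
    # should reproduce SHA256:a6q9NxsNZrZEV0A5kbH/JddCAfqNrGsPa4KuBYS3k1I above.
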
Nov 29 09:24:15 np0005539860 dracut[1271]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/b277050f-8ace-464d-abb6-4c46d4c45253 /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-642.el9.x86_64kdump.img 5.14.0-642.el9.x86_64
Nov 29 09:24:15 np0005539860 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Nov 29 09:24:15 np0005539860 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Nov 29 09:24:15 np0005539860 dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Nov 29 09:24:15 np0005539860 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 09:24:15 np0005539860 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 09:24:15 np0005539860 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 09:24:15 np0005539860 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: memstrack is not available
Nov 29 09:24:16 np0005539860 dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Nov 29 09:24:16 np0005539860 dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Nov 29 09:24:17 np0005539860 dracut[1271]: memstrack is not available
Nov 29 09:24:17 np0005539860 dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Nov 29 09:24:17 np0005539860 dracut[1271]: *** Including module: systemd ***
Nov 29 09:24:17 np0005539860 chronyd[802]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Nov 29 09:24:17 np0005539860 chronyd[802]: System clock TAI offset set to 37 seconds
Nov 29 09:24:17 np0005539860 dracut[1271]: *** Including module: fips ***
Nov 29 09:24:18 np0005539860 dracut[1271]: *** Including module: systemd-initrd ***
Nov 29 09:24:18 np0005539860 dracut[1271]: *** Including module: i18n ***
Nov 29 09:24:18 np0005539860 dracut[1271]: *** Including module: drm ***
Nov 29 09:24:18 np0005539860 dracut[1271]: *** Including module: prefixdevname ***
Nov 29 09:24:18 np0005539860 dracut[1271]: *** Including module: kernel-modules ***
Nov 29 09:24:19 np0005539860 kernel: block vda: the capability attribute has been deprecated.
Nov 29 09:24:19 np0005539860 dracut[1271]: *** Including module: kernel-modules-extra ***
Nov 29 09:24:19 np0005539860 dracut[1271]: *** Including module: qemu ***
Nov 29 09:24:19 np0005539860 dracut[1271]: *** Including module: fstab-sys ***
Nov 29 09:24:19 np0005539860 dracut[1271]: *** Including module: rootfs-block ***
Nov 29 09:24:19 np0005539860 dracut[1271]: *** Including module: terminfo ***
Nov 29 09:24:19 np0005539860 dracut[1271]: *** Including module: udev-rules ***
Nov 29 09:24:20 np0005539860 dracut[1271]: Skipping udev rule: 91-permissions.rules
Nov 29 09:24:20 np0005539860 dracut[1271]: Skipping udev rule: 80-drivers-modprobe.rules
Nov 29 09:24:20 np0005539860 dracut[1271]: *** Including module: virtiofs ***
Nov 29 09:24:20 np0005539860 dracut[1271]: *** Including module: dracut-systemd ***
Nov 29 09:24:21 np0005539860 dracut[1271]: *** Including module: usrmount ***
Nov 29 09:24:21 np0005539860 dracut[1271]: *** Including module: base ***
Nov 29 09:24:21 np0005539860 dracut[1271]: *** Including module: fs-lib ***
Nov 29 09:24:21 np0005539860 dracut[1271]: *** Including module: kdumpbase ***
Nov 29 09:24:21 np0005539860 irqbalance[789]: Cannot change IRQ 25 affinity: Operation not permitted
Nov 29 09:24:21 np0005539860 irqbalance[789]: IRQ 25 affinity is now unmanaged
Nov 29 09:24:21 np0005539860 irqbalance[789]: Cannot change IRQ 31 affinity: Operation not permitted
Nov 29 09:24:21 np0005539860 irqbalance[789]: IRQ 31 affinity is now unmanaged
Nov 29 09:24:21 np0005539860 irqbalance[789]: Cannot change IRQ 28 affinity: Operation not permitted
Nov 29 09:24:21 np0005539860 irqbalance[789]: IRQ 28 affinity is now unmanaged
Nov 29 09:24:21 np0005539860 irqbalance[789]: Cannot change IRQ 32 affinity: Operation not permitted
Nov 29 09:24:21 np0005539860 irqbalance[789]: IRQ 32 affinity is now unmanaged
Nov 29 09:24:21 np0005539860 irqbalance[789]: Cannot change IRQ 30 affinity: Operation not permitted
Nov 29 09:24:21 np0005539860 irqbalance[789]: IRQ 30 affinity is now unmanaged
Nov 29 09:24:21 np0005539860 irqbalance[789]: Cannot change IRQ 29 affinity: Operation not permitted
Nov 29 09:24:21 np0005539860 irqbalance[789]: IRQ 29 affinity is now unmanaged
Nov 29 09:24:21 np0005539860 dracut[1271]: *** Including module: microcode_ctl-fw_dir_override ***
Nov 29 09:24:21 np0005539860 dracut[1271]:  microcode_ctl module: mangling fw_dir
Nov 29 09:24:21 np0005539860 dracut[1271]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Nov 29 09:24:21 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Nov 29 09:24:21 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel" is ignored
Nov 29 09:24:21 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Nov 29 09:24:21 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Nov 29 09:24:21 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Nov 29 09:24:22 np0005539860 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Nov 29 09:24:22 np0005539860 dracut[1271]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
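Every intel-06-* caveat directory coming back "ignored" is likely the expected outcome here: the microcode_ctl dracut hook only keeps caveat microcode when the running CPU matches the directory's family/model, and a QEMU/KVM vCPU matches none of them, so fw_dir ends up back at its default. An illustrative check of the identifiers the hook compares against (not part of the job):

    grep -E 'vendor_id|cpu family|^model' /proc/cpuinfo | sort -u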
Nov 29 09:24:22 np0005539860 dracut[1271]: *** Including module: openssl ***
Nov 29 09:24:22 np0005539860 dracut[1271]: *** Including module: shutdown ***
Nov 29 09:24:22 np0005539860 dracut[1271]: *** Including module: squash ***
Nov 29 09:24:22 np0005539860 dracut[1271]: *** Including modules done ***
Nov 29 09:24:22 np0005539860 dracut[1271]: *** Installing kernel module dependencies ***
Nov 29 09:24:23 np0005539860 dracut[1271]: *** Installing kernel module dependencies done ***
Nov 29 09:24:23 np0005539860 dracut[1271]: *** Resolving executable dependencies ***
Nov 29 09:24:25 np0005539860 dracut[1271]: *** Resolving executable dependencies done ***
Nov 29 09:24:25 np0005539860 dracut[1271]: *** Generating early-microcode cpio image ***
Nov 29 09:24:25 np0005539860 dracut[1271]: *** Store current command line parameters ***
Nov 29 09:24:25 np0005539860 dracut[1271]: Stored kernel commandline:
Nov 29 09:24:25 np0005539860 dracut[1271]: No dracut internal kernel commandline stored in the initramfs
Nov 29 09:24:25 np0005539860 dracut[1271]: *** Install squash loader ***
Nov 29 09:24:26 np0005539860 dracut[1271]: *** Squashing the files inside the initramfs ***
Nov 29 09:24:27 np0005539860 dracut[1271]: *** Squashing the files inside the initramfs done ***
Nov 29 09:24:27 np0005539860 dracut[1271]: *** Creating image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' ***
Nov 29 09:24:27 np0005539860 dracut[1271]: *** Hardlinking files ***
Nov 29 09:24:27 np0005539860 dracut[1271]: *** Hardlinking files done ***
Nov 29 09:24:28 np0005539860 dracut[1271]: *** Creating initramfs image file '/boot/initramfs-5.14.0-642.el9.x86_64kdump.img' done ***
Nov 29 09:24:28 np0005539860 kdumpctl[1015]: kdump: kexec: loaded kdump kernel
Nov 29 09:24:28 np0005539860 kdumpctl[1015]: kdump: Starting kdump: [OK]
Nov 29 09:24:28 np0005539860 systemd[1]: Finished Crash recovery kernel arming.
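The dracut run above is kdumpctl rebuilding the crash-kernel initramfs it names in the "Creating image file" lines; once kexec has loaded it, the arming service finishes. Two standard checks that the armed state took effect (commands not part of the job):

    kdumpctl status                      # should report kdump as operational
    cat /sys/kernel/kexec_crash_loaded   # 1 once the crash kernel is loaded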
Nov 29 09:24:28 np0005539860 systemd[1]: Startup finished in 1.645s (kernel) + 2.854s (initrd) + 20.066s (userspace) = 24.565s.
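The startup summary adds up as logged (1.645 s kernel + 2.854 s initrd + 20.066 s userspace = 24.565 s). The same breakdown can be reproduced after the fact, and the slowest units identified, with:

    systemd-analyze time
    systemd-analyze blame | head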
Nov 29 09:24:41 np0005539860 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 09:24:47 np0005539860 systemd[1]: Created slice User Slice of UID 1000.
Nov 29 09:24:47 np0005539860 systemd[1]: Starting User Runtime Directory /run/user/1000...
Nov 29 09:24:47 np0005539860 systemd-logind[794]: New session 1 of user zuul.
Nov 29 09:24:47 np0005539860 systemd[1]: Finished User Runtime Directory /run/user/1000.
Nov 29 09:24:47 np0005539860 systemd[1]: Starting User Manager for UID 1000...
Nov 29 09:24:47 np0005539860 systemd[4301]: Queued start job for default target Main User Target.
Nov 29 09:24:47 np0005539860 systemd[4301]: Created slice User Application Slice.
Nov 29 09:24:47 np0005539860 systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Nov 29 09:24:47 np0005539860 systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 09:24:47 np0005539860 systemd[4301]: Reached target Paths.
Nov 29 09:24:47 np0005539860 systemd[4301]: Reached target Timers.
Nov 29 09:24:47 np0005539860 systemd[4301]: Starting D-Bus User Message Bus Socket...
Nov 29 09:24:47 np0005539860 systemd[4301]: Starting Create User's Volatile Files and Directories...
Nov 29 09:24:47 np0005539860 systemd[4301]: Finished Create User's Volatile Files and Directories.
Nov 29 09:24:47 np0005539860 systemd[4301]: Listening on D-Bus User Message Bus Socket.
Nov 29 09:24:47 np0005539860 systemd[4301]: Reached target Sockets.
Nov 29 09:24:47 np0005539860 systemd[4301]: Reached target Basic System.
Nov 29 09:24:47 np0005539860 systemd[4301]: Reached target Main User Target.
Nov 29 09:24:47 np0005539860 systemd[4301]: Startup finished in 173ms.
Nov 29 09:24:47 np0005539860 systemd[1]: Started User Manager for UID 1000.
Nov 29 09:24:47 np0005539860 systemd[1]: Started Session 1 of User zuul.
Nov 29 09:24:47 np0005539860 python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:24:50 np0005539860 python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:24:56 np0005539860 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:24:57 np0005539860 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Nov 29 09:24:59 np0005539860 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC4jEXH7SCdzTvYwiTGbwwqzvQv/1f5T1+8wCpBHqDfljNbCR8MkSFYuprDtSkhvq6mfHfP633Vc8Os4xVLqAdNybJUNSLjFCzKTb42S60x02I6AoE4GMqc9K7ivtQQGBEQIkHbPlKLAlZMqotAzhAf5mCU/FfITVfyTwNlOEbM1NXZX4R5Slb+FNbuhDFop4WzmRloMf+dfjmm4ObTTaUQufiB0aUf8ZkAfK9XecypM86D/nrqZPfjArUacaKUSRxrf5IvjE5fCJB4NTx4EkG42mBXw0XtV5u72bvCWmTSU9yN5frqwM/I6kJZZYpAd762hCkL4dueoCwbK/hMocnn7xt3P0YgOwgLZFSU1gt+Wo9ZJ5yHlbATrw+ehYrIV4/QxVM1tOq5aTiC3AVmvLvShkg0aQORnG30CGOcLFi4ssJubIyLcKQDCV1IV1bTw9rbbsQTPzLcAg9eijd0yrlz4/pShndyeWHN4H7oRkWxLracWk5t+8zJw4GMbha909M= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:24:59 np0005539860 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:00 np0005539860 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:25:00 np0005539860 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764426299.6762817-207-90094301518321/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=c3068d7deebb460b803ad04f3276ec0a_id_rsa follow=False checksum=901d87a09c2963a936da130f6e64976e0bd13942 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:00 np0005539860 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:25:01 np0005539860 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764426300.6399071-240-260022704277284/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=c3068d7deebb460b803ad04f3276ec0a_id_rsa.pub follow=False checksum=e466e5bfbbee13711977b730f4bc6b8c769774d6 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
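Note that Ansible logs file modes in decimal here: mode=448 on the ~/.ssh directory is octal 0700, mode=384 on id_rsa is 0600, and mode=420 on id_rsa.pub is 0644 (likewise 493 = 0755, 511 = 0777 and 288 = 0440 later in this log). A one-liner to convert:

    printf '%o %o %o %o %o %o\n' 448 384 420 493 511 288   # -> 700 600 644 755 777 440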
Nov 29 09:25:02 np0005539860 python3[4971]: ansible-ping Invoked with data=pong
Nov 29 09:25:03 np0005539860 python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:25:06 np0005539860 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Nov 29 09:25:06 np0005539860 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:07 np0005539860 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:07 np0005539860 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:07 np0005539860 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:08 np0005539860 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:08 np0005539860 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:09 np0005539860 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:10 np0005539860 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:25:10 np0005539860 python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764426310.0111923-21-179163398419525/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:11 np0005539860 python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:11 np0005539860 python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:12 np0005539860 python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:12 np0005539860 python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:12 np0005539860 python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:13 np0005539860 python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:13 np0005539860 python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:13 np0005539860 python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:13 np0005539860 python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:14 np0005539860 python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:14 np0005539860 python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:14 np0005539860 python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:15 np0005539860 python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:15 np0005539860 python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:15 np0005539860 python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:15 np0005539860 python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:16 np0005539860 python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:16 np0005539860 python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:16 np0005539860 python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:17 np0005539860 python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:17 np0005539860 python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:17 np0005539860 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:17 np0005539860 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:18 np0005539860 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:18 np0005539860 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:25:18 np0005539860 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
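Each of the two dozen or so ansible-authorized_key invocations above idempotently adds one public key for the zuul user. Roughly what each call amounts to in shell (a sketch, not the module's actual implementation; PUBKEY stands in for the logged key material):

    PUBKEY='ssh-ed25519 AAAA... user@example'   # placeholder
    install -d -m 700 -o zuul -g zuul /home/zuul/.ssh
    touch /home/zuul/.ssh/authorized_keys
    grep -qxF "$PUBKEY" /home/zuul/.ssh/authorized_keys \
        || printf '%s\n' "$PUBKEY" >> /home/zuul/.ssh/authorized_keys
    chmod 600 /home/zuul/.ssh/authorized_keys
    chown zuul:zuul /home/zuul/.ssh/authorized_keys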
Nov 29 09:25:21 np0005539860 python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 09:25:21 np0005539860 systemd[1]: Starting Time & Date Service...
Nov 29 09:25:21 np0005539860 systemd[1]: Started Time & Date Service.
Nov 29 09:25:21 np0005539860 systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
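The community.general.timezone module works through systemd-timedated over D-Bus, which is why the service starts on demand just before the change is logged. The equivalent manual command:

    timedatectl set-timezone UTC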
Nov 29 09:25:21 np0005539860 python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:22 np0005539860 python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:25:22 np0005539860 python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1764426322.1213999-153-7078639153597/source _original_basename=tmpq4n4jvur follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:23 np0005539860 python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:25:23 np0005539860 python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764426323.0524924-183-191790507039069/source _original_basename=tmpihz1jrkp follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:24 np0005539860 python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:25:24 np0005539860 python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1764426324.2201529-231-35250478190487/source _original_basename=tmpy8hfgido follow=False checksum=d1fb5b4f9f73b8c84cf3b5af0e2af5367a435780 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:25 np0005539860 python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:25:25 np0005539860 python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:25:26 np0005539860 python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:25:26 np0005539860 python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1764426325.9393344-273-15847578089438/source _original_basename=tmp4h4209gm follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:27 np0005539860 python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163e3b-3c83-99db-5072-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
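This is the usual deploy-then-validate pattern for sudoers drop-ins: the file lands in /etc/sudoers.d with mode 288 (octal 0440, which sudo expects) and visudo then syntax-checks the configuration. To check the specific drop-in rather than the whole tree:

    visudo -cf /etc/sudoers.d/zuul-sudo-grep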
Nov 29 09:25:27 np0005539860 python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-99db-5072-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Nov 29 09:25:29 np0005539860 python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:45 np0005539860 python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:25:51 np0005539860 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Nov 29 09:26:18 np0005539860 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Nov 29 09:26:18 np0005539860 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2677] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 09:26:18 np0005539860 systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2892] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2911] settings: (eth1): created default wired connection 'Wired connection 1'
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2913] device (eth1): carrier: link connected
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2914] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2918] policy: auto-activating connection 'Wired connection 1' (f4a45717-a9d3-3a47-bdef-eed90f186bef)
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2921] device (eth1): Activation: starting connection 'Wired connection 1' (f4a45717-a9d3-3a47-bdef-eed90f186bef)
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2921] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2923] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2925] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 09:26:18 np0005539860 NetworkManager[858]: <info>  [1764426378.2928] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
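The kernel lines above record a NIC being hot-plugged: PCI ID [1af4:1000] is Red Hat's virtio-net device, whose BARs are assigned before virtio-pci enables it; NetworkManager then sees a new eth1, creates the default 'Wired connection 1' profile, and starts a 45-second DHCP transaction. To confirm the device from userspace (illustrative):

    lspci -nn | grep '1af4:1000'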
Nov 29 09:26:21 np0005539860 python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163e3b-3c83-6a8a-82f2-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:26:28 np0005539860 python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:26:28 np0005539860 python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764426388.0501738-102-214839764838640/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=1ae21c474085844d5ecb1b1465ca30c2887af393 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
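The copy above drops a NetworkManager keyfile profile for the private network into /etc/NetworkManager/system-connections with the root-only 0600 mode NetworkManager requires; the play then restarts NetworkManager (next lines) so the profile is read. The real file contents are not in the log; a minimal keyfile of this kind might look like the following, where every value is an assumption for illustration (192.0.2.0/24 is the documentation range):

    # Contents are hypothetical -- the deployed file is not logged.
    cat > /etc/NetworkManager/system-connections/ci-private-network.nmconnection <<'EOF'
    [connection]
    id=ci-private-network
    type=ethernet
    interface-name=eth1

    [ipv4]
    method=manual
    address1=192.0.2.10/24
    EOF
    chmod 600 /etc/NetworkManager/system-connections/ci-private-network.nmconnection
    nmcli connection reload   # lighter-weight alternative to a full restart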
Nov 29 09:26:29 np0005539860 python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 09:26:29 np0005539860 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 09:26:29 np0005539860 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 09:26:29 np0005539860 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 09:26:29 np0005539860 systemd[1]: Stopping Network Manager...
Nov 29 09:26:29 np0005539860 NetworkManager[858]: <info>  [1764426389.6769] caught SIGTERM, shutting down normally.
Nov 29 09:26:29 np0005539860 NetworkManager[858]: <info>  [1764426389.6778] dhcp4 (eth0): canceled DHCP transaction
Nov 29 09:26:29 np0005539860 NetworkManager[858]: <info>  [1764426389.6778] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 09:26:29 np0005539860 NetworkManager[858]: <info>  [1764426389.6778] dhcp4 (eth0): state changed no lease
Nov 29 09:26:29 np0005539860 NetworkManager[858]: <info>  [1764426389.6780] manager: NetworkManager state is now CONNECTING
Nov 29 09:26:29 np0005539860 NetworkManager[858]: <info>  [1764426389.6847] dhcp4 (eth1): canceled DHCP transaction
Nov 29 09:26:29 np0005539860 NetworkManager[858]: <info>  [1764426389.6847] dhcp4 (eth1): state changed no lease
Nov 29 09:26:29 np0005539860 NetworkManager[858]: <info>  [1764426389.6917] exiting (success)
Nov 29 09:26:29 np0005539860 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 09:26:29 np0005539860 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 09:26:29 np0005539860 systemd[1]: Stopped Network Manager.
Nov 29 09:26:29 np0005539860 systemd[1]: NetworkManager.service: Consumed 1.069s CPU time, 10.0M memory peak.
Nov 29 09:26:29 np0005539860 systemd[1]: Starting Network Manager...
Nov 29 09:26:29 np0005539860 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.7548] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:8fdefcd0-656c-425f-85db-4aad72467491)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.7552] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.7630] manager[0x55fa87738070]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 09:26:29 np0005539860 systemd[1]: Starting Hostname Service...
Nov 29 09:26:29 np0005539860 systemd[1]: Started Hostname Service.
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8792] hostname: hostname: using hostnamed
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8793] hostname: static hostname changed from (none) to "np0005539860.novalocal"
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8799] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8806] manager[0x55fa87738070]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8806] manager[0x55fa87738070]: rfkill: WWAN hardware radio set enabled
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8850] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8850] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8851] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8851] manager: Networking is enabled by state file
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8854] settings: Loaded settings plugin: keyfile (internal)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8862] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8911] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8929] dhcp: init: Using DHCP client 'internal'
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8934] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8942] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8952] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8964] device (lo): Activation: starting connection 'lo' (960ffe02-1dfc-4f61-974b-5b08f23a4149)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8975] device (eth0): carrier: link connected
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8986] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8996] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.8997] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9008] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9020] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9030] device (eth1): carrier: link connected
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9037] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9044] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (f4a45717-a9d3-3a47-bdef-eed90f186bef) (indicated)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9044] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9052] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9064] device (eth1): Activation: starting connection 'Wired connection 1' (f4a45717-a9d3-3a47-bdef-eed90f186bef)
Nov 29 09:26:29 np0005539860 systemd[1]: Started Network Manager.
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9080] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9086] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9089] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9092] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9095] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9099] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9102] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9107] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9111] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9122] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9125] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9139] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9144] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9181] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9185] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9193] device (lo): Activation: successful, device activated.
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9203] dhcp4 (eth0): state changed new lease, address=38.102.83.64
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9212] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 09:26:29 np0005539860 systemd[1]: Starting Network Manager Wait Online...
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9313] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9344] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9350] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9359] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9370] device (eth0): Activation: successful, device activated.
Nov 29 09:26:29 np0005539860 NetworkManager[7177]: <info>  [1764426389.9380] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 09:26:30 np0005539860 python3[7257]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163e3b-3c83-6a8a-82f2-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:26:40 np0005539860 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 09:26:59 np0005539860 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 09:27:01 np0005539860 systemd[4301]: Starting Mark boot as successful...
Nov 29 09:27:01 np0005539860 systemd[4301]: Finished Mark boot as successful.
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.2956] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 09:27:15 np0005539860 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 09:27:15 np0005539860 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3247] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3249] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3256] device (eth1): Activation: successful, device activated.
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3261] manager: startup complete
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3263] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <warn>  [1764426435.3269] device (eth1): Activation: failed for connection 'Wired connection 1'
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3275] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Nov 29 09:27:15 np0005539860 systemd[1]: Finished Network Manager Wait Online.
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3367] dhcp4 (eth1): canceled DHCP transaction
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3368] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3369] dhcp4 (eth1): state changed no lease
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3397] policy: auto-activating connection 'ci-private-network' (2bbbff6e-5d91-5d09-a38a-62b587d04722)
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3406] device (eth1): Activation: starting connection 'ci-private-network' (2bbbff6e-5d91-5d09-a38a-62b587d04722)
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3408] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3413] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3426] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3444] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3503] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3511] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 09:27:15 np0005539860 NetworkManager[7177]: <info>  [1764426435.3528] device (eth1): Activation: successful, device activated.
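The sequence from 09:26:29 to 09:27:15 shows the default profile losing the race: 'Wired connection 1' waits out its 45-second DHCP timeout on eth1, fails with ip-config-unavailable, and NetworkManager then auto-activates the freshly deployed ci-private-network profile, whose ip-config completes immediately (consistent with static addressing). The end state can be inspected with:

    nmcli -f NAME,UUID,DEVICE connection show --active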
Nov 29 09:27:23 np0005539860 python3[7402]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:27:24 np0005539860 python3[7475]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764426443.498927-259-190800394159528/source _original_basename=tmpm65oeqbt follow=False checksum=bc6feaf8ce580ff90c82f78b86e2e9463797b536 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:27:25 np0005539860 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 09:28:24 np0005539860 systemd-logind[794]: Session 1 logged out. Waiting for processes to exit.
Nov 29 09:30:01 np0005539860 systemd[4301]: Created slice User Background Tasks Slice.
Nov 29 09:30:01 np0005539860 systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Nov 29 09:30:01 np0005539860 systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Nov 29 09:30:48 np0005539860 chronyd[802]: Selected source 167.160.187.179 (2.centos.pool.ntp.org)
Nov 29 09:35:06 np0005539860 systemd-logind[794]: New session 3 of user zuul.
Nov 29 09:35:06 np0005539860 systemd[1]: Started Session 3 of User zuul.
Nov 29 09:35:07 np0005539860 python3[7571]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2516-7311-000000001cf0-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:35:07 np0005539860 python3[7600]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:35:07 np0005539860 python3[7626]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:35:08 np0005539860 python3[7652]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:35:08 np0005539860 python3[7678]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:35:08 np0005539860 python3[7704]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:35:09 np0005539860 python3[7782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:35:09 np0005539860 python3[7855]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764426909.0130334-494-261770381610246/source _original_basename=tmpawl6bhlz follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:35:10 np0005539860 python3[7905]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 09:35:10 np0005539860 systemd[1]: Reloading.
Nov 29 09:35:10 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:35:12 np0005539860 python3[7960]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Nov 29 09:35:12 np0005539860 python3[7986]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:35:13 np0005539860 python3[8014]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:35:13 np0005539860 python3[8042]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:35:13 np0005539860 python3[8070]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:35:14 np0005539860 python3[8097]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-2516-7311-000000001cf7-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
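The four writes above program per-slice cgroup v2 I/O throttles; the wait_for beforehand only confirms the io.max file exists. A minimal Python sketch of the same writes, assuming cgroup v2 is mounted at /sys/fs/cgroup and the throttled block device is 252:0 as logged (run as root):

    from pathlib import Path

    # Limit string copied verbatim from the logged commands: IOPS and
    # bytes-per-second caps for block device 252:0.
    LIMIT = "252:0 riops=18000 wiops=18000 rbps=262144000 wbps=262144000"

    for unit in ("init.scope", "machine.slice", "system.slice", "user.slice"):
        io_max = Path("/sys/fs/cgroup") / unit / "io.max"
        if io_max.exists():                  # the slice must already exist
            io_max.write_text(LIMIT + "\n")  # kernel parses the key=value pairs
            print(unit, "->", io_max.read_text().strip())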
Nov 29 09:35:14 np0005539860 python3[8127]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 09:35:16 np0005539860 systemd-logind[794]: Session 3 logged out. Waiting for processes to exit.
Nov 29 09:35:16 np0005539860 systemd[1]: session-3.scope: Deactivated successfully.
Nov 29 09:35:16 np0005539860 systemd[1]: session-3.scope: Consumed 4.907s CPU time.
Nov 29 09:35:16 np0005539860 systemd-logind[794]: Removed session 3.
Nov 29 09:35:17 np0005539860 systemd-logind[794]: New session 4 of user zuul.
Nov 29 09:35:17 np0005539860 systemd[1]: Started Session 4 of User zuul.
Nov 29 09:35:18 np0005539860 python3[8162]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Nov 29 09:35:32 np0005539860 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 09:35:32 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 09:35:32 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 09:35:32 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 09:35:32 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 09:35:32 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 09:35:32 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 09:35:32 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 09:35:41 np0005539860 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 09:35:41 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 09:35:41 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 09:35:41 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 09:35:41 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 09:35:41 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 09:35:41 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 09:35:41 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 09:35:49 np0005539860 kernel: SELinux:  Converting 385 SID table entries...
Nov 29 09:35:49 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 09:35:49 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 09:35:49 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 09:35:49 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 09:35:49 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 09:35:49 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 09:35:49 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 09:35:51 np0005539860 setsebool[8231]: The virt_use_nfs policy boolean was changed to 1 by root
Nov 29 09:35:51 np0005539860 setsebool[8231]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
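Both SELinux booleans are flipped so libvirt guests can use NFS-backed storage and sandboxed containers keep full capabilities. A hedged CLI-equivalent sketch (the log does not show whether the change was made persistent with -P):

    import subprocess

    for boolean in ("virt_use_nfs", "virt_sandbox_use_all_caps"):
        # -P writes the change into the policy so it survives reboots.
        subprocess.run(["setsebool", "-P", boolean, "1"], check=True)
        state = subprocess.run(["getsebool", boolean],
                               capture_output=True, text=True, check=True)
        print(state.stdout.strip())  # e.g. "virt_use_nfs --> on"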
Nov 29 09:36:02 np0005539860 kernel: SELinux:  Converting 388 SID table entries...
Nov 29 09:36:02 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 09:36:02 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 09:36:02 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 09:36:02 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 09:36:02 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 09:36:02 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 09:36:02 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 09:36:19 np0005539860 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 09:36:19 np0005539860 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 09:36:20 np0005539860 systemd[1]: Starting man-db-cache-update.service...
Nov 29 09:36:20 np0005539860 systemd[1]: Reloading.
Nov 29 09:36:20 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:36:20 np0005539860 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 09:36:23 np0005539860 python3[11228]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163e3b-3c83-cbcb-8ea1-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:36:24 np0005539860 kernel: evm: overlay not supported
Nov 29 09:36:24 np0005539860 systemd[4301]: Starting D-Bus User Message Bus...
Nov 29 09:36:24 np0005539860 dbus-broker-launch[12037]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Nov 29 09:36:24 np0005539860 dbus-broker-launch[12037]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Nov 29 09:36:24 np0005539860 systemd[4301]: Started D-Bus User Message Bus.
Nov 29 09:36:24 np0005539860 dbus-broker-launch[12037]: Ready
Nov 29 09:36:24 np0005539860 systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Nov 29 09:36:24 np0005539860 systemd[4301]: Created slice Slice /user.
Nov 29 09:36:24 np0005539860 systemd[4301]: podman-11943.scope: unit configures an IP firewall, but not running as root.
Nov 29 09:36:24 np0005539860 systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Nov 29 09:36:24 np0005539860 systemd[4301]: Started podman-11943.scope.
Nov 29 09:36:24 np0005539860 systemd[4301]: Started podman-pause-dda99cea.scope.
Nov 29 09:36:24 np0005539860 python3[12524]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.51:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.51:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:36:24 np0005539860 python3[12524]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
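The blockinfile task appends a marked TOML stanza to registries.conf, declaring 38.102.83.51:5001 as an insecure (plain HTTP) registry. A rough Python equivalent of its idempotent behavior:

    from pathlib import Path

    CONF = Path("/etc/containers/registries.conf")
    BEGIN = "# BEGIN ANSIBLE MANAGED BLOCK"
    END = "# END ANSIBLE MANAGED BLOCK"
    BLOCK = '[[registry]]\nlocation = "38.102.83.51:5001"\ninsecure = true'

    text = CONF.read_text()
    if BEGIN not in text:            # state=present: only append once
        with CONF.open("a") as fh:
            fh.write(f"\n{BEGIN}\n{BLOCK}\n{END}\n")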
Nov 29 09:36:25 np0005539860 systemd[1]: session-4.scope: Deactivated successfully.
Nov 29 09:36:25 np0005539860 systemd[1]: session-4.scope: Consumed 1min 137ms CPU time.
Nov 29 09:36:25 np0005539860 systemd-logind[794]: Session 4 logged out. Waiting for processes to exit.
Nov 29 09:36:25 np0005539860 systemd-logind[794]: Removed session 4.
Nov 29 09:36:41 np0005539860 irqbalance[789]: Cannot change IRQ 27 affinity: Operation not permitted
Nov 29 09:36:41 np0005539860 irqbalance[789]: IRQ 27 affinity is now unmanaged
Nov 29 09:36:50 np0005539860 systemd-logind[794]: New session 5 of user zuul.
Nov 29 09:36:50 np0005539860 systemd[1]: Started Session 5 of User zuul.
Nov 29 09:36:50 np0005539860 python3[21533]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFANV8EUwsvC/SnFPvwMdQyRfkE5fMBTm6rNSz9nHB9dqxR6nz0ye0tawzKqQBvrtrtLsTZ+ClzBxaRVuwu+2DE= zuul@np0005539859.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:36:51 np0005539860 python3[21686]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFANV8EUwsvC/SnFPvwMdQyRfkE5fMBTm6rNSz9nHB9dqxR6nz0ye0tawzKqQBvrtrtLsTZ+ClzBxaRVuwu+2DE= zuul@np0005539859.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Nov 29 09:36:52 np0005539860 python3[21985]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005539860.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Nov 29 09:36:52 np0005539860 python3[22178]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFANV8EUwsvC/SnFPvwMdQyRfkE5fMBTm6rNSz9nHB9dqxR6nz0ye0tawzKqQBvrtrtLsTZ+ClzBxaRVuwu+2DE= zuul@np0005539859.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
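The same ECDSA key from np0005539859 is authorized for zuul, root, and the freshly created cloud-admin user. Roughly what ansible.posix.authorized_key does here, minus ownership handling:

    import pwd
    from pathlib import Path

    KEY = ("ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBFANV8E"
           "UwsvC/SnFPvwMdQyRfkE5fMBTm6rNSz9nHB9dqxR6nz0ye0tawzKqQBvrtrtLsTZ+ClzBxaRVuwu+2DE="
           " zuul@np0005539859.novalocal")

    for user in ("zuul", "root", "cloud-admin"):
        home = Path(pwd.getpwnam(user).pw_dir)
        ssh_dir = home / ".ssh"
        ssh_dir.mkdir(mode=0o700, exist_ok=True)
        auth = ssh_dir / "authorized_keys"
        lines = auth.read_text().splitlines() if auth.exists() else []
        if KEY not in lines:                  # idempotent append
            auth.write_text("\n".join(lines + [KEY]) + "\n")
        auth.chmod(0o600)                     # chown to the user is omitted here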
Nov 29 09:36:53 np0005539860 python3[22438]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:36:53 np0005539860 python3[22705]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764427012.73982-135-279273987080050/source _original_basename=tmpre4t0e95 follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:36:54 np0005539860 python3[22968]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Nov 29 09:36:54 np0005539860 systemd[1]: Starting Hostname Service...
Nov 29 09:36:54 np0005539860 systemd[1]: Started Hostname Service.
Nov 29 09:36:54 np0005539860 systemd-hostnamed[23060]: Changed pretty hostname to 'compute-0'
Nov 29 09:36:54 np0005539860 systemd-hostnamed[23060]: Hostname set to <compute-0> (static)
Nov 29 09:36:54 np0005539860 NetworkManager[7177]: <info>  [1764427014.6598] hostname: static hostname changed from "np0005539860.novalocal" to "compute-0"
Nov 29 09:36:54 np0005539860 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 09:36:54 np0005539860 systemd[1]: Started Network Manager Script Dispatcher Service.
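hostname with use=systemd goes through systemd-hostnamed over D-Bus, which is why hostnamed logs both the pretty and static changes and NetworkManager reacts immediately. The CLI equivalent from Python:

    import subprocess

    # Without --static/--pretty/--transient flags, hostnamectl sets all three.
    subprocess.run(["hostnamectl", "set-hostname", "compute-0"], check=True)
    print(subprocess.run(["hostnamectl", "--static"], capture_output=True,
                         text=True, check=True).stdout.strip())  # compute-0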
Nov 29 09:36:55 np0005539860 systemd-logind[794]: Session 5 logged out. Waiting for processes to exit.
Nov 29 09:36:55 np0005539860 systemd[1]: session-5.scope: Deactivated successfully.
Nov 29 09:36:55 np0005539860 systemd[1]: session-5.scope: Consumed 2.757s CPU time.
Nov 29 09:36:55 np0005539860 systemd-logind[794]: Removed session 5.
Nov 29 09:37:04 np0005539860 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 09:37:16 np0005539860 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 09:37:16 np0005539860 systemd[1]: Finished man-db-cache-update.service.
Nov 29 09:37:16 np0005539860 systemd[1]: man-db-cache-update.service: Consumed 1min 9.601s CPU time.
Nov 29 09:37:16 np0005539860 systemd[1]: run-r4eb158124e1d43b0b8ed53be7815325a.service: Deactivated successfully.
Nov 29 09:37:24 np0005539860 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 09:39:51 np0005539860 systemd[1]: Starting Cleanup of Temporary Directories...
Nov 29 09:39:51 np0005539860 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Nov 29 09:39:51 np0005539860 systemd[1]: Finished Cleanup of Temporary Directories.
Nov 29 09:39:51 np0005539860 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Nov 29 09:41:29 np0005539860 systemd-logind[794]: New session 6 of user zuul.
Nov 29 09:41:29 np0005539860 systemd[1]: Started Session 6 of User zuul.
Nov 29 09:41:29 np0005539860 python3[30068]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:41:31 np0005539860 python3[30184]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:41:32 np0005539860 python3[30257]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764427291.3393106-33576-201248288610385/source mode=0755 _original_basename=delorean.repo follow=False checksum=a16f090252000d02a7f7d540bb10f7c1c9cd4ac5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:41:32 np0005539860 python3[30283]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:41:33 np0005539860 python3[30356]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764427291.3393106-33576-201248288610385/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=0bdbb813b840548359ae77c28d76ca272ccaf31b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:41:33 np0005539860 python3[30382]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:41:33 np0005539860 python3[30455]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764427291.3393106-33576-201248288610385/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:41:34 np0005539860 python3[30481]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:41:34 np0005539860 python3[30554]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764427291.3393106-33576-201248288610385/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:41:34 np0005539860 python3[30580]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:41:35 np0005539860 python3[30653]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764427291.3393106-33576-201248288610385/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:41:35 np0005539860 python3[30679]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:41:36 np0005539860 python3[30752]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764427291.3393106-33576-201248288610385/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:41:36 np0005539860 python3[30778]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Nov 29 09:41:37 np0005539860 python3[30851]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1764427291.3393106-33576-201248288610385/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=25e801a9a05537c191e2aa500f19076ac31d3e5b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:44:23 np0005539860 python3[30913]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:49:23 np0005539860 systemd-logind[794]: Session 6 logged out. Waiting for processes to exit.
Nov 29 09:49:23 np0005539860 systemd[1]: session-6.scope: Deactivated successfully.
Nov 29 09:49:23 np0005539860 systemd[1]: session-6.scope: Consumed 6.265s CPU time.
Nov 29 09:49:23 np0005539860 systemd-logind[794]: Removed session 6.
Nov 29 09:56:01 np0005539860 systemd-logind[794]: New session 7 of user zuul.
Nov 29 09:56:01 np0005539860 systemd[1]: Started Session 7 of User zuul.
Nov 29 09:56:02 np0005539860 python3.9[31085]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:56:04 np0005539860 python3.9[31266]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
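That one _raw_params blob is a whole bootstrap pipeline: fetch the repo-setup tarball, unpack it under /var/tmp, install it into a throwaway venv, run it for the antelope branch, then clean up. The same flow sketched in Python, with the URL and branch copied from the logged command:

    import os
    import subprocess
    import tarfile
    import tempfile
    import urllib.request

    URL = ("https://github.com/openstack-k8s-operators/repo-setup/"
           "archive/refs/heads/main.tar.gz")

    with tempfile.TemporaryDirectory(dir="/var/tmp") as tmp:   # auto rm -rf
        tarball, _ = urllib.request.urlretrieve(URL)
        with tarfile.open(tarball) as tf:
            tf.extractall(tmp)
        src = os.path.join(tmp, "repo-setup-main")
        subprocess.run(["python3", "-m", "venv", "./venv"], cwd=src, check=True)
        subprocess.run(["./venv/bin/pip", "install", "./"], cwd=src, check=True,
                       env={**os.environ, "PBR_VERSION": "0.0.0"})
        subprocess.run(["./venv/bin/repo-setup", "current-podified",
                        "-b", "antelope"], cwd=src, check=True)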
Nov 29 09:56:11 np0005539860 systemd[1]: session-7.scope: Deactivated successfully.
Nov 29 09:56:11 np0005539860 systemd[1]: session-7.scope: Consumed 8.374s CPU time.
Nov 29 09:56:11 np0005539860 systemd-logind[794]: Session 7 logged out. Waiting for processes to exit.
Nov 29 09:56:11 np0005539860 systemd-logind[794]: Removed session 7.
Nov 29 09:56:17 np0005539860 systemd-logind[794]: New session 8 of user zuul.
Nov 29 09:56:17 np0005539860 systemd[1]: Started Session 8 of User zuul.
Nov 29 09:56:18 np0005539860 python3.9[31476]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:56:19 np0005539860 systemd[1]: session-8.scope: Deactivated successfully.
Nov 29 09:56:19 np0005539860 systemd-logind[794]: Session 8 logged out. Waiting for processes to exit.
Nov 29 09:56:19 np0005539860 systemd-logind[794]: Removed session 8.
Nov 29 09:56:35 np0005539860 systemd-logind[794]: New session 9 of user zuul.
Nov 29 09:56:35 np0005539860 systemd[1]: Started Session 9 of User zuul.
Nov 29 09:56:36 np0005539860 python3.9[31656]: ansible-ansible.legacy.ping Invoked with data=pong
Nov 29 09:56:37 np0005539860 python3.9[31830]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:56:38 np0005539860 python3.9[31982]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:56:40 np0005539860 python3.9[32135]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 09:56:41 np0005539860 python3.9[32287]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:56:42 np0005539860 python3.9[32439]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 09:56:42 np0005539860 python3.9[32562]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428201.5499458-73-252569733839678/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
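bootc.fact lands in /etc/ansible/facts.d, so its output appears under ansible_local on the next fact gather; the log records only its checksum, not its contents. A minimal illustration of the facts.d mechanism with a hypothetical demo.fact:

    import json
    import os
    from pathlib import Path

    facts_d = Path("/etc/ansible/facts.d")
    facts_d.mkdir(parents=True, exist_ok=True)
    demo = facts_d / "demo.fact"              # hypothetical example fact
    demo.write_text(json.dumps({"booted": True}))
    os.chmod(demo, 0o644)  # non-executable .fact files are parsed as JSON/INI;
                           # executable ones (like bootc.fact, mode 755) are run
                           # and must print JSON on stdout.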
Nov 29 09:56:43 np0005539860 python3.9[32714]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:56:44 np0005539860 python3.9[32870]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:56:45 np0005539860 python3.9[33022]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:56:46 np0005539860 python3.9[33172]: ansible-ansible.builtin.service_facts Invoked
Nov 29 09:56:55 np0005539860 python3.9[33425]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
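Since /proc/cmdline is read-only, this lineinfile call can presumably only succeed as an assertion: it reports ok when cloud-init=disabled is already on the kernel command line and errors out otherwise. The same check in plain Python:

    tokens = open("/proc/cmdline").read().split()
    assert "cloud-init=disabled" in tokens, "cloud-init not disabled at boot"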
Nov 29 09:56:56 np0005539860 python3.9[33575]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:56:57 np0005539860 python3.9[33730]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:56:59 np0005539860 python3.9[33888]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 09:57:00 np0005539860 python3.9[33972]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 09:57:42 np0005539860 systemd[1]: Reloading.
Nov 29 09:57:42 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:57:42 np0005539860 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Nov 29 09:57:42 np0005539860 systemd[1]: Reloading.
Nov 29 09:57:42 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:57:42 np0005539860 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Nov 29 09:57:42 np0005539860 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Nov 29 09:57:42 np0005539860 systemd[1]: Reloading.
Nov 29 09:57:42 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:57:43 np0005539860 systemd[1]: Listening on LVM2 poll daemon socket.
Nov 29 09:57:43 np0005539860 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Nov 29 09:57:43 np0005539860 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Nov 29 09:57:43 np0005539860 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Nov 29 09:58:44 np0005539860 kernel: SELinux:  Converting 2717 SID table entries...
Nov 29 09:58:44 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 09:58:44 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 09:58:44 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 09:58:44 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 09:58:44 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 09:58:44 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 09:58:44 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 09:58:44 np0005539860 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Nov 29 09:58:45 np0005539860 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 09:58:45 np0005539860 systemd[1]: Starting man-db-cache-update.service...
Nov 29 09:58:45 np0005539860 systemd[1]: Reloading.
Nov 29 09:58:45 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:58:45 np0005539860 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 09:58:46 np0005539860 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 09:58:46 np0005539860 systemd[1]: Finished man-db-cache-update.service.
Nov 29 09:58:46 np0005539860 systemd[1]: man-db-cache-update.service: Consumed 1.181s CPU time.
Nov 29 09:58:46 np0005539860 systemd[1]: run-r8100de08596b401da8019585031c016e.service: Deactivated successfully.
Nov 29 09:58:46 np0005539860 python3.9[35492]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:58:48 np0005539860 python3.9[35773]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Nov 29 09:58:49 np0005539860 python3.9[35925]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Nov 29 09:58:51 np0005539860 irqbalance[789]: Cannot change IRQ 26 affinity: Operation not permitted
Nov 29 09:58:51 np0005539860 irqbalance[789]: IRQ 26 affinity is now unmanaged
Nov 29 09:58:51 np0005539860 python3.9[36078]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:58:52 np0005539860 python3.9[36230]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Nov 29 09:58:54 np0005539860 python3.9[36382]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:58:55 np0005539860 python3.9[36534]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 09:58:58 np0005539860 python3.9[36657]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428334.4215584-236-252527184240357/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:59:00 np0005539860 python3.9[36809]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 09:59:01 np0005539860 python3.9[36961]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:59:02 np0005539860 python3.9[37114]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
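This trio populates the LVM devices file: stat it, run vgimportdevices --all to import any existing PVs, then touch it so the file exists even when nothing was found. Condensed:

    import subprocess
    from pathlib import Path

    devfile = Path("/etc/lvm/devices/system.devices")
    if not devfile.exists():
        # May exit non-zero when there is nothing to import; tolerated here.
        subprocess.run(["/usr/sbin/vgimportdevices", "--all"], check=False)
        devfile.touch(mode=0o600, exist_ok=True)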
Nov 29 09:59:03 np0005539860 python3.9[37266]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Nov 29 09:59:03 np0005539860 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 09:59:04 np0005539860 python3.9[37420]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 09:59:05 np0005539860 python3.9[37578]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 09:59:06 np0005539860 python3.9[37738]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Nov 29 09:59:07 np0005539860 python3.9[37891]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 09:59:08 np0005539860 python3.9[38049]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
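The qemu account is pinned to uid/gid 107, a hugetlbfs group is prepared at gid 42477, and /var/lib/vhost_sockets is created for vhost-user sockets. Shell-level equivalents driven from Python (the SELinux setype=virt_cache_t step is left to the real task):

    import os
    import shutil
    import subprocess

    subprocess.run(["groupadd", "-f", "-g", "107", "qemu"], check=True)
    subprocess.run(["useradd", "-u", "107", "-g", "qemu", "-s", "/sbin/nologin",
                    "-c", "qemu user", "qemu"], check=False)  # may already exist
    subprocess.run(["groupadd", "-f", "-g", "42477", "hugetlbfs"], check=True)
    os.makedirs("/var/lib/vhost_sockets", mode=0o755, exist_ok=True)
    shutil.chown("/var/lib/vhost_sockets", "qemu", "qemu")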
Nov 29 09:59:09 np0005539860 python3.9[38201]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 09:59:11 np0005539860 python3.9[38354]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:59:12 np0005539860 python3.9[38506]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 09:59:12 np0005539860 python3.9[38629]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428351.4566677-355-60421256194591/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:59:13 np0005539860 python3.9[38781]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 09:59:13 np0005539860 systemd[1]: Starting Load Kernel Modules...
Nov 29 09:59:14 np0005539860 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 29 09:59:14 np0005539860 kernel: Bridge firewalling registered
Nov 29 09:59:14 np0005539860 systemd-modules-load[38785]: Inserted module 'br_netfilter'
Nov 29 09:59:14 np0005539860 systemd[1]: Finished Load Kernel Modules.
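Dropping 99-edpm.conf into /etc/modules-load.d and restarting systemd-modules-load is what pulls in br_netfilter, restoring the bridge traffic filtering the kernel warns about above. Only that one module insertion is visible in the log; any other entries in the real file are not. A sketch:

    import subprocess
    from pathlib import Path

    Path("/etc/modules-load.d/99-edpm.conf").write_text("br_netfilter\n")
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                   check=True)
    # The module should now appear in the loaded-module list.
    subprocess.run(["lsmod"], check=True)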
Nov 29 09:59:14 np0005539860 python3.9[38940]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 09:59:15 np0005539860 python3.9[39063]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428354.32955-378-70087521502929/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:59:16 np0005539860 python3.9[39215]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 09:59:19 np0005539860 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Nov 29 09:59:19 np0005539860 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Nov 29 09:59:20 np0005539860 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 09:59:20 np0005539860 systemd[1]: Starting man-db-cache-update.service...
Nov 29 09:59:20 np0005539860 systemd[1]: Reloading.
Nov 29 09:59:20 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:59:20 np0005539860 systemd[1]: Starting dnf makecache...
Nov 29 09:59:20 np0005539860 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 09:59:20 np0005539860 dnf[39288]: Failed determining last makecache time.
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-openstack-barbican-42b4c41831408a8e323 122 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 143 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-openstack-cinder-1c00d6490d88e436f26ef 130 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-python-stevedore-c4acc5639fd2329372142 136 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-python-cloudkitty-tests-tempest-2c80f8 149 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-os-net-config-9758ab42364673d01bc5014e 145 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 144 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-python-designate-tests-tempest-347fdbc 142 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-openstack-glance-1fd12c29b339f30fe823e 115 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 110 kB/s | 3.0 kB     00:00
Nov 29 09:59:20 np0005539860 dnf[39288]: delorean-openstack-manila-3c01b7181572c95dac462 113 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: delorean-python-whitebox-neutron-tests-tempest- 116 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: delorean-openstack-octavia-ba397f07a7331190208c 124 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: delorean-openstack-watcher-c014f81a8647287f6dcc 118 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: delorean-python-tcib-1124124ec06aadbac34f0d340b 121 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 118 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: delorean-openstack-swift-dc98a8463506ac520c469a 102 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: delorean-python-tempestconf-8515371b7cceebd4282 115 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: delorean-openstack-heat-ui-013accbfd179753bc3f0 121 kB/s | 3.0 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: CentOS Stream 9 - BaseOS                         27 kB/s | 7.3 kB     00:00
Nov 29 09:59:21 np0005539860 dnf[39288]: CentOS Stream 9 - AppStream                      68 kB/s | 7.4 kB     00:00
Nov 29 09:59:21 np0005539860 python3.9[40498]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 09:59:21 np0005539860 dnf[39288]: CentOS Stream 9 - CRB                            31 kB/s | 7.2 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: CentOS Stream 9 - Extras packages                75 kB/s | 8.3 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: dlrn-antelope-testing                            88 kB/s | 3.0 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: dlrn-antelope-build-deps                         96 kB/s | 3.0 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: centos9-rabbitmq                                114 kB/s | 3.0 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: centos9-storage                                 119 kB/s | 3.0 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: centos9-opstools                                113 kB/s | 3.0 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: NFV SIG OpenvSwitch                              35 kB/s | 3.0 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: repo-setup-centos-appstream                      93 kB/s | 4.4 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: repo-setup-centos-baseos                        178 kB/s | 3.9 kB     00:00
Nov 29 09:59:22 np0005539860 python3.9[41536]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Nov 29 09:59:22 np0005539860 dnf[39288]: repo-setup-centos-highavailability              157 kB/s | 3.9 kB     00:00
Nov 29 09:59:22 np0005539860 dnf[39288]: repo-setup-centos-powertools                    166 kB/s | 4.3 kB     00:00
Nov 29 09:59:23 np0005539860 dnf[39288]: Extra Packages for Enterprise Linux 9 - x86_64  135 kB/s |  33 kB     00:00
Nov 29 09:59:23 np0005539860 python3.9[42357]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 09:59:23 np0005539860 dnf[39288]: Metadata cache created.
Nov 29 09:59:23 np0005539860 systemd[1]: dnf-makecache.service: Deactivated successfully.
Nov 29 09:59:23 np0005539860 systemd[1]: Finished dnf makecache.
Nov 29 09:59:23 np0005539860 systemd[1]: dnf-makecache.service: Consumed 1.876s CPU time.
Nov 29 09:59:24 np0005539860 python3.9[43319]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:59:24 np0005539860 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 09:59:24 np0005539860 systemd[1]: Finished man-db-cache-update.service.
Nov 29 09:59:24 np0005539860 systemd[1]: man-db-cache-update.service: Consumed 5.035s CPU time.
Nov 29 09:59:24 np0005539860 systemd[1]: run-r8d0e9fc6944244a5bdb71da412056737.service: Deactivated successfully.
Nov 29 09:59:24 np0005539860 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 09:59:24 np0005539860 systemd[1]: Starting Authorization Manager...
Nov 29 09:59:24 np0005539860 systemd[1]: Started Dynamic System Tuning Daemon.
Nov 29 09:59:24 np0005539860 polkitd[43668]: Started polkitd version 0.117
Nov 29 09:59:24 np0005539860 systemd[1]: Started Authorization Manager.
Nov 29 09:59:25 np0005539860 python3.9[43838]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 09:59:25 np0005539860 systemd[1]: Stopping Dynamic System Tuning Daemon...
Nov 29 09:59:25 np0005539860 systemd[1]: tuned.service: Deactivated successfully.
Nov 29 09:59:25 np0005539860 systemd[1]: Stopped Dynamic System Tuning Daemon.
Nov 29 09:59:26 np0005539860 systemd[1]: Starting Dynamic System Tuning Daemon...
Nov 29 09:59:26 np0005539860 systemd[1]: Started Dynamic System Tuning Daemon.
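The tuned sequence reads /etc/tuned/active_profile, switches to throughput-performance only when it differs, then enables and restarts the daemon; the interleaved dnf-makecache output above is unrelated. Roughly:

    import subprocess
    from pathlib import Path

    active = Path("/etc/tuned/active_profile")
    current = active.read_text().strip() if active.exists() else ""
    if current != "throughput-performance":
        subprocess.run(["tuned-adm", "profile", "throughput-performance"],
                       check=True)
    subprocess.run(["systemctl", "enable", "tuned"], check=True)
    subprocess.run(["systemctl", "restart", "tuned"], check=True)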
Nov 29 09:59:26 np0005539860 python3.9[44000]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Nov 29 09:59:29 np0005539860 python3.9[44152]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 09:59:29 np0005539860 systemd[1]: Reloading.
Nov 29 09:59:29 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:59:30 np0005539860 python3.9[44343]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 09:59:30 np0005539860 systemd[1]: Reloading.
Nov 29 09:59:30 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 09:59:31 np0005539860 python3.9[44533]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:59:32 np0005539860 python3.9[44686]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:59:32 np0005539860 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
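Taken together with the earlier dd (guarded by creates=/swap), the chmod to 0600, and the ansible.posix.mount fstab entry, this completes a 1 GiB file-backed swap. The whole workflow in one sketch:

    import os
    import subprocess

    SWAP = "/swap"
    if not os.path.exists(SWAP):   # mirrors creates=/swap
        subprocess.run(["dd", "if=/dev/zero", f"of={SWAP}",
                        "count=1024", "bs=1M"], check=True)
    os.chmod(SWAP, 0o600)
    subprocess.run(["mkswap", SWAP], check=True)
    subprocess.run(["swapon", SWAP], check=True)
    entry = "/swap none swap sw 0 0\n"         # what state=present persisted
    with open("/etc/fstab", "r+") as fh:
        if entry not in fh.read():
            fh.write(entry)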
Nov 29 09:59:33 np0005539860 python3.9[44839]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:59:35 np0005539860 python3.9[45001]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
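Writing 2 to ksm/run stops kernel samepage merging and unmerges any pages it had already shared, consistent with the ksm/ksmtuned services being disabled just before. In Python:

    from pathlib import Path

    Path("/sys/kernel/mm/ksm/run").write_text("2\n")  # 0=stop, 1=run, 2=stop+unmerge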
Nov 29 09:59:36 np0005539860 python3.9[45154]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 09:59:36 np0005539860 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 29 09:59:36 np0005539860 systemd[1]: Stopped Apply Kernel Variables.
Nov 29 09:59:36 np0005539860 systemd[1]: Stopping Apply Kernel Variables...
Nov 29 09:59:36 np0005539860 systemd[1]: Starting Apply Kernel Variables...
Nov 29 09:59:36 np0005539860 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Nov 29 09:59:36 np0005539860 systemd[1]: Finished Apply Kernel Variables.
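Restarting systemd-sysctl re-applies every fragment under /etc/sysctl.d, including the 99-edpm.conf installed earlier (whose contents the log does not show). The same pattern with a hypothetical setting:

    import subprocess
    from pathlib import Path

    Path("/etc/sysctl.d/99-example.conf").write_text("net.ipv4.ip_forward = 1\n")
    subprocess.run(["systemctl", "restart", "systemd-sysctl.service"], check=True)
    print(Path("/proc/sys/net/ipv4/ip_forward").read_text().strip())  # "1"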
Nov 29 09:59:37 np0005539860 systemd[1]: session-9.scope: Deactivated successfully.
Nov 29 09:59:37 np0005539860 systemd[1]: session-9.scope: Consumed 2min 17.180s CPU time.
Nov 29 09:59:37 np0005539860 systemd-logind[794]: Session 9 logged out. Waiting for processes to exit.
Nov 29 09:59:37 np0005539860 systemd-logind[794]: Removed session 9.
Nov 29 09:59:42 np0005539860 systemd-logind[794]: New session 10 of user zuul.
Nov 29 09:59:42 np0005539860 systemd[1]: Started Session 10 of User zuul.
Nov 29 09:59:43 np0005539860 python3.9[45337]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:59:45 np0005539860 python3.9[45491]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:59:46 np0005539860 python3.9[45647]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:59:47 np0005539860 python3.9[45798]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 09:59:48 np0005539860 python3.9[45954]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 09:59:49 np0005539860 python3.9[46038]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 09:59:51 np0005539860 python3.9[46191]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 09:59:52 np0005539860 python3.9[46362]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 09:59:53 np0005539860 python3.9[46514]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 09:59:53 np0005539860 systemd[1]: var-lib-containers-storage-overlay-compat3254962560-merged.mount: Deactivated successfully.
Nov 29 09:59:53 np0005539860 podman[46515]: 2025-11-29 14:59:53.874778575 +0000 UTC m=+0.105790338 system refresh
Nov 29 09:59:54 np0005539860 python3.9[46675]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 09:59:54 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 09:59:55 np0005539860 python3.9[46798]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428394.0956776-109-238192654553367/.source.json follow=False _original_basename=podman_network_config.j2 checksum=d9ec098f4eb373c3127854b13aeaf03b341de38f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
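The rendered podman.json becomes the definition of the default podman network for the netavark backend; a quick way to confirm what podman sees afterwards:

    import json
    import subprocess

    out = subprocess.run(["podman", "network", "inspect", "podman"],
                         capture_output=True, text=True, check=True)
    net = json.loads(out.stdout)[0]   # inspect returns a JSON array
    print(net["name"], net["driver"])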
Nov 29 09:59:56 np0005539860 python3.9[46950]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 09:59:57 np0005539860 python3.9[47073]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428395.8746493-124-29861812364936/.source.conf follow=False _original_basename=registries.conf.j2 checksum=a92d4bce7d9cad3a31d9a297b9e21f629ee446cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:59:58 np0005539860 python3.9[47225]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:59:58 np0005539860 python3.9[47377]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 09:59:59 np0005539860 python3.9[47529]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:00:00 np0005539860 python3.9[47681]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
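[editor's note] The four ini_file tasks above converge /etc/containers/containers.conf on pids_limit=4096 in [containers], events_logger="journald" and runtime="crun" in [engine], and network_backend="netavark" in [network]. containers.conf is TOML, but community.general.ini_file edits it line-wise as INI; a minimal configparser sketch reproducing the same end state (the quoted string values are written verbatim, as ini_file does):

    import configparser

    conf = configparser.ConfigParser()
    conf.read("/etc/containers/containers.conf")

    # Same section/option/value triples as the four tasks in the log.
    for section, option, value in [
        ("containers", "pids_limit", "4096"),
        ("engine", "events_logger", '"journald"'),
        ("engine", "runtime", '"crun"'),
        ("network", "network_backend", '"netavark"'),
    ]:
        if not conf.has_section(section):
            conf.add_section(section)
        conf.set(section, option, value)

    with open("/etc/containers/containers.conf", "w") as fh:
        conf.write(fh)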
Nov 29 10:00:01 np0005539860 python3.9[47831]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:00:01 np0005539860 python3.9[47985]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:00:03 np0005539860 python3.9[48138]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:00:06 np0005539860 python3.9[48298]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:00:08 np0005539860 python3.9[48451]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:00:11 np0005539860 python3.9[48604]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:00:13 np0005539860 python3.9[48760]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:00:16 np0005539860 python3.9[48930]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:00:18 np0005539860 python3.9[49083]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:00:33 np0005539860 python3.9[49420]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
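[editor's note] The dnf tasks from 10:00:01 through 10:00:33 are a cache-priming pass: every package set the role will need is fetched with download_only=True, so the later state=present installs (for example openvswitch at 10:02:10) complete from the local cache. A minimal subprocess sketch of the same idea, assuming the stock dnf CLI:

    import subprocess

    PACKAGE_SETS = [
        ["podman", "buildah"],
        ["openvswitch"],
        ["os-net-config"],
        # ... the remaining sets from the log follow the same pattern
    ]

    for pkgs in PACKAGE_SETS:
        # --downloadonly populates the dnf cache without installing,
        # mirroring the module's download_only=True.
        subprocess.run(["dnf", "-y", "install", "--downloadonly", *pkgs],
                       check=True)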
Nov 29 10:00:35 np0005539860 python3.9[49576]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:00:36 np0005539860 python3.9[49751]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:00:37 np0005539860 python3.9[49874]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1764428436.220251-272-237640624035047/.source.json _original_basename=.0j6z8r5q follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
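[editor's note] /root/.config/containers is created (zuul:zuul, 0770) and auth.json is installed with mode 0660. The logged checksum bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f is the SHA-1 of the two-byte string "{}", so the file being installed is an empty JSON object: no registry credentials are configured at this point, and the pulls that follow are effectively anonymous. Quick verification:

    import hashlib

    # The copy task logged checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f;
    # that is SHA-1 of "{}", i.e. an empty JSON auth file.
    assert (hashlib.sha1(b"{}").hexdigest()
            == "bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f")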
Nov 29 10:00:38 np0005539860 python3.9[50026]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 29 10:00:38 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:41 np0005539860 systemd[1]: var-lib-containers-storage-overlay-compat2971189651-lower\x2dmapped.mount: Deactivated successfully.
Nov 29 10:00:44 np0005539860 podman[50038]: 2025-11-29 15:00:44.56881909 +0000 UTC m=+5.812331878 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 10:00:44 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:44 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:44 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:45 np0005539860 python3.9[50337]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 29 10:00:45 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:55 np0005539860 podman[50350]: 2025-11-29 15:00:55.596931541 +0000 UTC m=+9.595809188 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 10:00:55 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:55 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:55 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:56 np0005539860 python3.9[50649]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 29 10:00:56 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:58 np0005539860 podman[50661]: 2025-11-29 15:00:58.154838594 +0000 UTC m=+1.401834098 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 10:00:58 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:58 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:58 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:00:59 np0005539860 python3.9[50897]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 29 10:00:59 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:11 np0005539860 podman[50909]: 2025-11-29 15:01:11.76403196 +0000 UTC m=+12.429490207 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 10:01:11 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:11 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:11 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:13 np0005539860 python3.9[51206]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 29 10:01:13 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:29 np0005539860 podman[51219]: 2025-11-29 15:01:29.387123396 +0000 UTC m=+16.240042516 image pull 4c40094793b487edb878e6f339e5974acc471f14f5a7d3266faecb44581a8770 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 29 10:01:29 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:29 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:29 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:30 np0005539860 python3.9[51544]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 29 10:01:31 np0005539860 podman[51557]: 2025-11-29 15:01:31.593420319 +0000 UTC m=+1.284146941 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 29 10:01:31 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:31 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:31 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:32 np0005539860 python3.9[51832]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 29 10:01:32 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:36 np0005539860 podman[51845]: 2025-11-29 15:01:36.090938591 +0000 UTC m=+3.343958085 image pull 743c1960518ee2a8df257b87dd40a31faa57a99c6d0aa394baae4cd418c3c2b2 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 29 10:01:36 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:36 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:36 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:37 np0005539860 python3.9[52099]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Nov 29 10:01:43 np0005539860 podman[52111]: 2025-11-29 15:01:43.141821044 +0000 UTC m=+6.047123283 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 29 10:01:43 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:43 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:01:43 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
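[editor's note] Each containers.podman.podman_image task above is a pull-only operation (pull=True, state=present, nothing is started), using the auth file written earlier. Two reading aids for these entries: the podman event timestamps are UTC while journald shows local time, hence the constant five-hour offset; and the m=+N.NN suffix is elapsed time since that podman process started, so for example the nova-compute pull took roughly 12 seconds and the ceilometer-compute pull about 16. A sketch of the equivalent CLI loop:

    import subprocess

    AUTH = "/root/.config/containers/auth.json"
    IMAGES = [
        "quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified",
        "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
        "quay.io/prometheus/node-exporter:v1.5.0",
        # ... remaining images from the log
    ]

    for image in IMAGES:
        # Equivalent of podman_image with pull=True/state=present:
        # fetch the image into local storage, run nothing.
        subprocess.run(["podman", "pull", "--authfile", AUTH, image],
                       check=True)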
Nov 29 10:01:43 np0005539860 systemd[1]: session-10.scope: Deactivated successfully.
Nov 29 10:01:43 np0005539860 systemd[1]: session-10.scope: Consumed 2min 30.105s CPU time.
Nov 29 10:01:43 np0005539860 systemd-logind[794]: Session 10 logged out. Waiting for processes to exit.
Nov 29 10:01:43 np0005539860 systemd-logind[794]: Removed session 10.
Nov 29 10:01:49 np0005539860 systemd-logind[794]: New session 11 of user zuul.
Nov 29 10:01:50 np0005539860 systemd[1]: Started Session 11 of User zuul.
Nov 29 10:01:51 np0005539860 python3.9[52515]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:01:52 np0005539860 python3.9[52671]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Nov 29 10:01:53 np0005539860 python3.9[52824]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 10:01:54 np0005539860 python3.9[52982]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
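[editor's note] Session 11 begins the openvswitch role: getent checks whether an openvswitch account already exists, then group gid 42476 and a nologin system account in the hugetlbfs group are ensured. Idempotence is Ansible's job; on a fresh host the two tasks are equivalent to roughly this, as a sketch:

    import subprocess

    subprocess.run(["groupadd", "--gid", "42476", "openvswitch"], check=True)
    subprocess.run([
        "useradd",
        "--uid", "42476",
        "--gid", "openvswitch",
        "--groups", "hugetlbfs",
        "--comment", "openvswitch user",
        "--shell", "/sbin/nologin",
        "openvswitch",
    ], check=True)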
Nov 29 10:01:55 np0005539860 python3.9[53142]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:01:56 np0005539860 python3.9[53226]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:01:58 np0005539860 python3.9[53387]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:02:10 np0005539860 kernel: SELinux:  Converting 2731 SID table entries...
Nov 29 10:02:10 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 10:02:10 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 10:02:10 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 10:02:10 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 10:02:10 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 10:02:10 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 10:02:10 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 10:02:11 np0005539860 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Nov 29 10:02:11 np0005539860 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Nov 29 10:02:12 np0005539860 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 10:02:12 np0005539860 systemd[1]: Starting man-db-cache-update.service...
Nov 29 10:02:12 np0005539860 systemd[1]: Reloading.
Nov 29 10:02:13 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:02:13 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:02:13 np0005539860 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 10:02:13 np0005539860 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 10:02:13 np0005539860 systemd[1]: Finished man-db-cache-update.service.
Nov 29 10:02:13 np0005539860 systemd[1]: run-r06cc3a7c78254dfd8b0792d9b4bf7515.service: Deactivated successfully.
Nov 29 10:02:15 np0005539860 python3.9[54485]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 10:02:15 np0005539860 systemd[1]: Reloading.
Nov 29 10:02:15 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:02:15 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:02:15 np0005539860 systemd[1]: Starting Open vSwitch Database Unit...
Nov 29 10:02:15 np0005539860 chown[54527]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Nov 29 10:02:15 np0005539860 ovs-ctl[54532]: /etc/openvswitch/conf.db does not exist ... (warning).
Nov 29 10:02:15 np0005539860 ovs-ctl[54532]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Nov 29 10:02:15 np0005539860 ovs-ctl[54532]: Starting ovsdb-server [  OK  ]
Nov 29 10:02:15 np0005539860 ovs-vsctl[54581]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Nov 29 10:02:15 np0005539860 ovs-vsctl[54601]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Nov 29 10:02:15 np0005539860 ovs-ctl[54532]: Configuring Open vSwitch system IDs [  OK  ]
Nov 29 10:02:15 np0005539860 ovs-vsctl[54607]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 10:02:15 np0005539860 ovs-ctl[54532]: Enabling remote OVSDB managers [  OK  ]
Nov 29 10:02:15 np0005539860 systemd[1]: Started Open vSwitch Database Unit.
Nov 29 10:02:15 np0005539860 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Nov 29 10:02:15 np0005539860 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Nov 29 10:02:15 np0005539860 systemd[1]: Starting Open vSwitch Forwarding Unit...
Nov 29 10:02:15 np0005539860 kernel: openvswitch: Open vSwitch switching datapath
Nov 29 10:02:15 np0005539860 ovs-ctl[54651]: Inserting openvswitch module [  OK  ]
Nov 29 10:02:16 np0005539860 ovs-ctl[54620]: Starting ovs-vswitchd [  OK  ]
Nov 29 10:02:16 np0005539860 ovs-vsctl[54671]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Nov 29 10:02:16 np0005539860 ovs-ctl[54620]: Enabling remote OVSDB managers [  OK  ]
Nov 29 10:02:16 np0005539860 systemd[1]: Started Open vSwitch Forwarding Unit.
Nov 29 10:02:16 np0005539860 systemd[1]: Starting Open vSwitch...
Nov 29 10:02:16 np0005539860 systemd[1]: Finished Open vSwitch.
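[editor's note] This is a first start of Open vSwitch: the chown complaint about /run/openvswitch appears to be benign first-start ordering noise, since ovs-ctl immediately creates an empty conf.db, starts ovsdb-server, seeds db-version, ovs-version, system-id, rundir and the hostname external-id, and then ovs-vswitchd loads the openvswitch kernel module and comes up. The seeded records can be read back from the single Open_vSwitch table row; a sketch assuming the ovs-vsctl CLI:

    import subprocess

    def ovs_get(column: str) -> str:
        # Read one column of the Open_vSwitch table row, e.g. the values
        # seeded by ovs-ctl during first start.
        out = subprocess.run(
            ["ovs-vsctl", "get", "Open_vSwitch", ".", column],
            check=True, capture_output=True, text=True,
        )
        return out.stdout.strip()

    print(ovs_get("ovs_version"))   # e.g. "3.3.5-115.el9s" as in the log
    print(ovs_get("external_ids"))  # includes system-id, rundir, hostname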
Nov 29 10:02:16 np0005539860 python3.9[54823]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:02:17 np0005539860 python3.9[54975]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Nov 29 10:02:19 np0005539860 kernel: SELinux:  Converting 2745 SID table entries...
Nov 29 10:02:19 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 10:02:19 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 10:02:19 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 10:02:19 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 10:02:19 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 10:02:19 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 10:02:19 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
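[editor's note] The sefcontext task at 10:02:17 adds a persistent file-context rule mapping /var/lib/edpm-config(/.*)? to container_file_t at level s0 and reloads policy, which is why a second SELinux SID-table conversion block appears here (and the load_policy seqno=10 audit line below). A CLI-equivalent sketch; restorecon is the usual follow-up and matches the setype the later file task applies:

    import subprocess

    # CLI equivalent of community.general.sefcontext with reload=True.
    subprocess.run([
        "semanage", "fcontext", "-a",
        "-t", "container_file_t",
        "-r", "s0",                       # selevel=s0 from the task
        "/var/lib/edpm-config(/.*)?",
    ], check=True)
    # Apply the new context to anything already on disk.
    subprocess.run(["restorecon", "-Rv", "/var/lib/edpm-config"], check=True)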
Nov 29 10:02:20 np0005539860 python3.9[55130]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:02:21 np0005539860 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Nov 29 10:02:21 np0005539860 python3.9[55288]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:02:23 np0005539860 python3.9[55441]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:02:25 np0005539860 python3.9[55729]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 10:02:26 np0005539860 python3.9[55879]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:02:27 np0005539860 python3.9[56033]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:02:29 np0005539860 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 10:02:29 np0005539860 systemd[1]: Starting man-db-cache-update.service...
Nov 29 10:02:29 np0005539860 systemd[1]: Reloading.
Nov 29 10:02:29 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:02:29 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:02:29 np0005539860 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 10:02:29 np0005539860 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 10:02:29 np0005539860 systemd[1]: Finished man-db-cache-update.service.
Nov 29 10:02:29 np0005539860 systemd[1]: run-r0d1a47d739a74fa994d80ceb9dd69443.service: Deactivated successfully.
Nov 29 10:02:30 np0005539860 python3.9[56349]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:02:30 np0005539860 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Nov 29 10:02:30 np0005539860 systemd[1]: Stopped Network Manager Wait Online.
Nov 29 10:02:30 np0005539860 systemd[1]: Stopping Network Manager Wait Online...
Nov 29 10:02:30 np0005539860 systemd[1]: Stopping Network Manager...
Nov 29 10:02:30 np0005539860 NetworkManager[7177]: <info>  [1764428550.6417] caught SIGTERM, shutting down normally.
Nov 29 10:02:30 np0005539860 NetworkManager[7177]: <info>  [1764428550.6429] dhcp4 (eth0): canceled DHCP transaction
Nov 29 10:02:30 np0005539860 NetworkManager[7177]: <info>  [1764428550.6429] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 10:02:30 np0005539860 NetworkManager[7177]: <info>  [1764428550.6430] dhcp4 (eth0): state changed no lease
Nov 29 10:02:30 np0005539860 NetworkManager[7177]: <info>  [1764428550.6431] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 10:02:30 np0005539860 NetworkManager[7177]: <info>  [1764428550.6507] exiting (success)
Nov 29 10:02:30 np0005539860 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 10:02:30 np0005539860 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 10:02:30 np0005539860 systemd[1]: NetworkManager.service: Deactivated successfully.
Nov 29 10:02:30 np0005539860 systemd[1]: Stopped Network Manager.
Nov 29 10:02:30 np0005539860 systemd[1]: NetworkManager.service: Consumed 17.433s CPU time, 4.3M memory peak, read 0B from disk, written 17.5K to disk.
Nov 29 10:02:30 np0005539860 systemd[1]: Starting Network Manager...
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.7245] NetworkManager (version 1.54.1-1.el9) is starting... (after a restart, boot:8fdefcd0-656c-425f-85db-4aad72467491)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.7248] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.7315] manager[0x5587b38b8090]: monitoring kernel firmware directory '/lib/firmware'.
Nov 29 10:02:30 np0005539860 systemd[1]: Starting Hostname Service...
Nov 29 10:02:30 np0005539860 systemd[1]: Started Hostname Service.
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8179] hostname: hostname: using hostnamed
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8179] hostname: static hostname changed from (none) to "compute-0"
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8183] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8187] manager[0x5587b38b8090]: rfkill: Wi-Fi hardware radio set enabled
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8187] manager[0x5587b38b8090]: rfkill: WWAN hardware radio set enabled
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8205] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-ovs.so)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8213] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-device-plugin-team.so)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8214] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8214] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8214] manager: Networking is enabled by state file
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8216] settings: Loaded settings plugin: keyfile (internal)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8219] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.1-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8241] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8249] dhcp: init: Using DHCP client 'internal'
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8253] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8257] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8261] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8267] device (lo): Activation: starting connection 'lo' (960ffe02-1dfc-4f61-974b-5b08f23a4149)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8272] device (eth0): carrier: link connected
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8275] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8279] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8279] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8284] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8292] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8298] device (eth1): carrier: link connected
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8301] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8306] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (2bbbff6e-5d91-5d09-a38a-62b587d04722) (indicated)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8307] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8312] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8319] device (eth1): Activation: starting connection 'ci-private-network' (2bbbff6e-5d91-5d09-a38a-62b587d04722)
Nov 29 10:02:30 np0005539860 systemd[1]: Started Network Manager.
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8333] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8340] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8343] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8345] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8348] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8360] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8362] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8364] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8367] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8373] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8375] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8383] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8393] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8413] dhcp4 (eth0): state changed new lease, address=38.102.83.64
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8419] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Nov 29 10:02:30 np0005539860 systemd[1]: Starting Network Manager Wait Online...
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8485] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8490] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8494] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8500] device (lo): Activation: successful, device activated.
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8506] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8508] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8512] manager: NetworkManager state is now CONNECTED_LOCAL
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8515] device (eth1): Activation: successful, device activated.
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8569] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8573] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8579] manager: NetworkManager state is now CONNECTED_SITE
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8586] device (eth0): Activation: successful, device activated.
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8593] manager: NetworkManager state is now CONNECTED_GLOBAL
Nov 29 10:02:30 np0005539860 NetworkManager[56360]: <info>  [1764428550.8597] manager: startup complete
Nov 29 10:02:30 np0005539860 systemd[1]: Finished Network Manager Wait Online.
Nov 29 10:02:31 np0005539860 python3.9[56575]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:02:35 np0005539860 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 10:02:35 np0005539860 systemd[1]: Starting man-db-cache-update.service...
Nov 29 10:02:35 np0005539860 systemd[1]: Reloading.
Nov 29 10:02:36 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:02:36 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:02:36 np0005539860 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 10:02:36 np0005539860 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 10:02:36 np0005539860 systemd[1]: Finished man-db-cache-update.service.
Nov 29 10:02:36 np0005539860 systemd[1]: run-r07762aeb44784392a7a1f55a3c0f598b.service: Deactivated successfully.
Nov 29 10:02:38 np0005539860 python3.9[57034]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:02:39 np0005539860 python3.9[57186]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:02:40 np0005539860 python3.9[57340]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:02:40 np0005539860 python3.9[57492]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:02:40 np0005539860 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 10:02:41 np0005539860 python3.9[57644]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:02:42 np0005539860 python3.9[57796]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
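[editor's note] Before handing interfaces to os-net-config, NetworkManager is told not to auto-create default DHCP profiles (no-auto-default=*), and any dns=none / rc-manager=unmanaged overrides are removed from both NetworkManager.conf and the cloud-init drop-in so NM manages resolv.conf again. End state for the main file as a configparser sketch (spacing differs slightly: the tasks use no_extra_spaces=True; the same two removals are also applied to /etc/NetworkManager/conf.d/99-cloud-init.conf):

    import configparser

    NM_CONF = "/etc/NetworkManager/NetworkManager.conf"

    conf = configparser.ConfigParser()
    conf.read(NM_CONF)
    if not conf.has_section("main"):
        conf.add_section("main")

    conf.set("main", "no-auto-default", "*")   # state=present
    conf.remove_option("main", "dns")          # state=absent
    conf.remove_option("main", "rc-manager")   # state=absent

    with open(NM_CONF, "w") as fh:
        conf.write(fh)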
Nov 29 10:02:43 np0005539860 python3.9[57948]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:02:44 np0005539860 python3.9[58071]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428562.7169359-229-272297354576381/.source _original_basename=.u5gubv4j follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:02:44 np0005539860 python3.9[58223]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:02:46 np0005539860 python3.9[58375]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Nov 29 10:02:46 np0005539860 python3.9[58527]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:02:49 np0005539860 python3.9[58954]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Nov 29 10:02:50 np0005539860 ansible-async_wrapper.py[59129]: Invoked with j952675313953 300 /home/zuul/.ansible/tmp/ansible-tmp-1764428569.516279-295-79684519859045/AnsiballZ_edpm_os_net_config.py _
Nov 29 10:02:50 np0005539860 ansible-async_wrapper.py[59132]: Starting module and watcher
Nov 29 10:02:50 np0005539860 ansible-async_wrapper.py[59132]: Start watching 59133 (300)
Nov 29 10:02:50 np0005539860 ansible-async_wrapper.py[59133]: Start module (59133)
Nov 29 10:02:50 np0005539860 ansible-async_wrapper.py[59129]: Return async_wrapper task started.
Nov 29 10:02:50 np0005539860 python3.9[59134]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
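[editor's note] The network apply runs through Ansible's async wrapper with a 300-second timeout (the "j952675313953 300" arguments above), so a connectivity flap during reconfiguration cannot kill the task mid-way. The module maps its arguments onto the os-net-config CLI; use_nmstate=True selects the nmstate backend (the exact provider flag is omitted here). With --detailed-exit-codes, rc 2 means "configuration files were changed" and is treated as success. A sketch of the equivalent invocation:

    import subprocess

    # Equivalent CLI call for the module args logged above
    # (cleanup=True, debug=True, detailed_exit_codes=True).
    result = subprocess.run([
        "os-net-config",
        "--config-file", "/etc/os-net-config/config.yaml",
        "--debug",
        "--cleanup",
        "--detailed-exit-codes",
    ])
    # 0 = no changes, 2 = changes applied; anything else is an error.
    if result.returncode not in (0, 2):
        raise SystemExit(result.returncode)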
Nov 29 10:02:51 np0005539860 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Nov 29 10:02:51 np0005539860 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Nov 29 10:02:51 np0005539860 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Nov 29 10:02:51 np0005539860 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Nov 29 10:02:51 np0005539860 kernel: cfg80211: failed to load regulatory.db
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.3547] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.3564] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4160] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4161] audit: op="connection-add" uuid="c3116e26-1722-42c8-aea8-860aed4c35bc" name="br-ex-br" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4175] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4176] audit: op="connection-add" uuid="1f2a1b0c-5846-4d89-8d8a-3c2c3a17b81d" name="br-ex-port" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4187] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4188] audit: op="connection-add" uuid="9ba7cba9-ba83-46ac-90e2-8d2b26da31e8" name="eth1-port" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4199] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4200] audit: op="connection-add" uuid="19485a6c-ca26-4172-8d7a-4ec1ce894d8c" name="vlan20-port" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4211] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4213] audit: op="connection-add" uuid="c7eb2a2c-0c0b-4fc1-b654-ad2ba0b92d4d" name="vlan21-port" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4222] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4224] audit: op="connection-add" uuid="98646367-626b-467b-b3ca-2c70a6552bb9" name="vlan22-port" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4241] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="connection.autoconnect-priority,connection.timestamp,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method,ipv6.dhcp-timeout" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4256] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4258] audit: op="connection-add" uuid="e9a8e3ca-8eaf-4a8a-bda1-fc254f2d52fa" name="br-ex-if" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4331] audit: op="connection-update" uuid="2bbbff6e-5d91-5d09-a38a-62b587d04722" name="ci-private-network" args="connection.slave-type,connection.controller,connection.master,connection.port-type,connection.timestamp,ipv4.method,ipv4.dns,ipv4.routes,ipv4.routing-rules,ipv4.addresses,ipv4.never-default,ovs-interface.type,ovs-external-ids.data,ipv6.addr-gen-mode,ipv6.dns,ipv6.routes,ipv6.method,ipv6.addresses,ipv6.routing-rules" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4345] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4348] audit: op="connection-add" uuid="410609c5-e572-48c7-989c-61dd68f2ec6c" name="vlan20-if" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4362] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4363] audit: op="connection-add" uuid="ce3800a3-bc22-4da8-8a47-ef47c7f08e54" name="vlan21-if" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4378] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4380] audit: op="connection-add" uuid="e7f2b169-8fc4-45ff-8dc9-c7b3579cddf3" name="vlan22-if" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4390] audit: op="connection-delete" uuid="f4a45717-a9d3-3a47-bdef-eed90f186bef" name="Wired connection 1" pid=59135 uid=0 result="success"
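
The run of connection-add audit entries above corresponds to the deployment tooling building an OVS bridge (br-ex) with one port per uplink and per VLAN, as NetworkManager profiles. A minimal sketch of equivalent nmcli commands, using the profile names recorded in the log; the VLAN tag values are assumptions inferred from the interface names, not taken from the log:

    # bridge, its internal port, and the internal interface
    nmcli conn add type ovs-bridge conn.interface br-ex con-name br-ex-br
    nmcli conn add type ovs-port conn.interface br-ex master br-ex-br con-name br-ex-port
    nmcli conn add type ovs-interface slave-type ovs-port conn.interface br-ex \
        master br-ex-port con-name br-ex-if ipv4.method disabled
    # uplink port and tagged access ports hanging off the same bridge
    nmcli conn add type ovs-port conn.interface eth1 master br-ex-br con-name eth1-port
    nmcli conn add type ovs-port conn.interface vlan20 master br-ex-br \
        ovs-port.tag 20 con-name vlan20-port
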
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4400] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4410] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4414] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (c3116e26-1722-42c8-aea8-860aed4c35bc)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4414] audit: op="connection-activate" uuid="c3116e26-1722-42c8-aea8-860aed4c35bc" name="br-ex-br" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4416] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4422] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4425] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (1f2a1b0c-5846-4d89-8d8a-3c2c3a17b81d)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4427] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4432] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4435] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (9ba7cba9-ba83-46ac-90e2-8d2b26da31e8)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4437] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4443] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4447] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (19485a6c-ca26-4172-8d7a-4ec1ce894d8c)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4449] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4454] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4458] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (c7eb2a2c-0c0b-4fc1-b654-ad2ba0b92d4d)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4460] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4465] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4469] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (98646367-626b-467b-b3ca-2c70a6552bb9)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4470] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4472] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4474] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4480] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4484] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4488] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (e9a8e3ca-8eaf-4a8a-bda1-fc254f2d52fa)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4489] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4492] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4494] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4495] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4497] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4506] device (eth1): disconnecting for new activation request.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4507] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4510] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4511] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4513] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4515] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4519] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4524] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (410609c5-e572-48c7-989c-61dd68f2ec6c)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4524] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4527] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4529] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4531] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4533] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4538] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4542] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (ce3800a3-bc22-4da8-8a47-ef47c7f08e54)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4543] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4545] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4548] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4549] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4552] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4556] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4560] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (e7f2b169-8fc4-45ff-8dc9-c7b3579cddf3)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4560] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4563] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4565] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4567] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4569] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4580] audit: op="device-reapply" interface="eth0" ifindex=2 args="connection.autoconnect-priority,ipv4.dhcp-client-id,ipv4.dhcp-timeout,802-3-ethernet.mtu,ipv6.addr-gen-mode,ipv6.method" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4582] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4585] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4587] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4592] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4596] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
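
Each managed device walks NetworkManager's activation state machine, visible verbatim in the entries above: unmanaged -> unavailable -> disconnected -> prepare -> config -> ip-config -> ip-check -> secondaries -> activated. The same transitions can be watched interactively with standard nmcli (no assumptions beyond the device name):

    nmcli -f DEVICE,TYPE,STATE,CONNECTION device status
    nmcli -f GENERAL.STATE,GENERAL.CONNECTION device show br-ex
    nmcli monitor    # streams device state changes as they happen
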
Nov 29 10:02:52 np0005539860 kernel: ovs-system: entered promiscuous mode
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4609] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4611] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4613] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4617] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4620] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4623] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4624] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 systemd-udevd[59141]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4628] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4631] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4633] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4634] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4639] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4641] dhcp4 (eth0): canceled DHCP transaction
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4641] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4641] dhcp4 (eth0): state changed no lease
Nov 29 10:02:52 np0005539860 kernel: Timeout policy base is empty
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4643] dhcp4 (eth0): activation: beginning transaction (no timeout)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4653] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4656] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59135 uid=0 result="fail" reason="Device is not activated"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4690] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4694] dhcp4 (eth0): state changed new lease, address=38.102.83.64
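
The dhcp4 entries show NetworkManager cancelling the in-flight transaction during the reapply on eth0 and immediately obtaining a fresh lease (38.102.83.64). A sketch of how to inspect the lease state after the fact:

    nmcli -g IP4.ADDRESS,IP4.GATEWAY device show eth0
    journalctl -u NetworkManager --grep dhcp4
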
Nov 29 10:02:52 np0005539860 systemd[1]: Starting Network Manager Script Dispatcher Service...
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4714] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4761] device (eth1): disconnecting for new activation request.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4761] audit: op="connection-activate" uuid="2bbbff6e-5d91-5d09-a38a-62b587d04722" name="ci-private-network" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4764] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4801] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59135 uid=0 result="success"
Nov 29 10:02:52 np0005539860 systemd[1]: Started Network Manager Script Dispatcher Service.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4823] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4921] device (eth1): Activation: starting connection 'ci-private-network' (2bbbff6e-5d91-5d09-a38a-62b587d04722)
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4928] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4931] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4937] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4938] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4939] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4939] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4940] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4941] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4950] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4955] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4958] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4960] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4963] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4965] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4967] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4969] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4972] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4975] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4978] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4981] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4984] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4988] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.4991] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5023] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5025] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5028] device (eth1): Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 kernel: br-ex: entered promiscuous mode
Nov 29 10:02:52 np0005539860 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5190] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Nov 29 10:02:52 np0005539860 kernel: vlan22: entered promiscuous mode
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5203] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5238] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5240] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5245] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 kernel: vlan21: entered promiscuous mode
Nov 29 10:02:52 np0005539860 systemd-udevd[59140]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5312] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5324] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 kernel: vlan20: entered promiscuous mode
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5392] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5393] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5398] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5438] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5448] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5467] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5470] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5475] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5539] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5549] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5566] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5567] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Nov 29 10:02:52 np0005539860 NetworkManager[56360]: <info>  [1764428572.5572] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
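
With the bridge, ports, and internal VLAN interfaces all reporting "Activation: successful", the resulting topology can be cross-checked from both the OVS side and the NetworkManager side:

    ovs-vsctl show                  # bridge br-ex carrying eth1 and the vlan* ports
    nmcli connection show --active  # the br-ex-br, *-port, and *-if profiles in use
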
Nov 29 10:02:53 np0005539860 NetworkManager[56360]: <info>  [1764428573.6571] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59135 uid=0 result="success"
Nov 29 10:02:53 np0005539860 NetworkManager[56360]: <info>  [1764428573.8501] checkpoint[0x5587b388f950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Nov 29 10:02:53 np0005539860 NetworkManager[56360]: <info>  [1764428573.8504] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59135 uid=0 result="success"
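
Checkpoint/1 and Checkpoint/2 are NetworkManager's transactional rollback mechanism: the caller (the Ansible-driven network configuration run, pid 59135) creates a checkpoint, keeps extending its rollback timeout while work proceeds, and destroys it on success; had the caller died mid-change, NetworkManager would have rolled the configuration back on timeout. A sketch of the same lifecycle over D-Bus with busctl; the timeout values here are illustrative, not from the log:

    # create: empty device list (all devices), 60 s rollback timeout, flags=1 (DESTROY_ALL)
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointCreate aouu 0 60 1
    # keep the checkpoint alive while work continues
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointAdjustRollbackTimeout ou \
        /org/freedesktop/NetworkManager/Checkpoint/1 120
    # commit by destroying the checkpoint
    busctl call org.freedesktop.NetworkManager /org/freedesktop/NetworkManager \
        org.freedesktop.NetworkManager CheckpointDestroy o \
        /org/freedesktop/NetworkManager/Checkpoint/1
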
Nov 29 10:02:54 np0005539860 NetworkManager[56360]: <info>  [1764428574.1668] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59135 uid=0 result="success"
Nov 29 10:02:54 np0005539860 NetworkManager[56360]: <info>  [1764428574.1680] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59135 uid=0 result="success"
Nov 29 10:02:54 np0005539860 python3.9[59469]: ansible-ansible.legacy.async_status Invoked with jid=j952675313953.59129 mode=status _async_dir=/root/.ansible_async
Nov 29 10:02:54 np0005539860 NetworkManager[56360]: <info>  [1764428574.3351] audit: op="networking-control" arg="global-dns-configuration" pid=59135 uid=0 result="success"
Nov 29 10:02:54 np0005539860 NetworkManager[56360]: <info>  [1764428574.3381] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Nov 29 10:02:54 np0005539860 NetworkManager[56360]: <info>  [1764428574.3405] audit: op="networking-control" arg="global-dns-configuration" pid=59135 uid=0 result="success"
Nov 29 10:02:54 np0005539860 NetworkManager[56360]: <info>  [1764428574.3426] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59135 uid=0 result="success"
Nov 29 10:02:54 np0005539860 NetworkManager[56360]: <info>  [1764428574.4706] checkpoint[0x5587b388fa20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Nov 29 10:02:54 np0005539860 NetworkManager[56360]: <info>  [1764428574.4709] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59135 uid=0 result="success"
Nov 29 10:02:54 np0005539860 ansible-async_wrapper.py[59133]: Module complete (59133)
Nov 29 10:02:55 np0005539860 ansible-async_wrapper.py[59132]: Done in kid B.
Nov 29 10:02:57 np0005539860 python3.9[59573]: ansible-ansible.legacy.async_status Invoked with jid=j952675313953.59129 mode=status _async_dir=/root/.ansible_async
Nov 29 10:02:58 np0005539860 python3.9[59673]: ansible-ansible.legacy.async_status Invoked with jid=j952675313953.59129 mode=cleanup _async_dir=/root/.ansible_async
Nov 29 10:02:59 np0005539860 python3.9[59825]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:02:59 np0005539860 python3.9[59948]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428578.5873556-322-111732258204441/.source.returncode _original_basename=.4vnxs7iu follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
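
The checksum recorded for os-net-config.returncode is the SHA-1 of the literal string "0", i.e. the network configuration step exited successfully:

    printf '0' | sha1sum
    # b6589fc6ab0dc82cf12099d1c2d40ab994e8410c  -
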
Nov 29 10:03:00 np0005539860 python3.9[60100]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:03:00 np0005539860 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 10:03:01 np0005539860 python3.9[60225]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428580.0353346-338-242123560234633/.source.cfg _original_basename=.u2ujnixh follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
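
This drop-in tells cloud-init to stop managing network configuration now that the deployment tooling owns it. The file content itself is not logged; the conventional cloud-init directive for this (an assumption, not taken from the log) is:

    cat > /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg <<'EOF'
    network:
      config: disabled
    EOF
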
Nov 29 10:03:01 np0005539860 python3.9[60378]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:03:02 np0005539860 systemd[1]: Reloading Network Manager...
Nov 29 10:03:02 np0005539860 NetworkManager[56360]: <info>  [1764428582.0721] audit: op="reload" arg="0" pid=60382 uid=0 result="success"
Nov 29 10:03:02 np0005539860 NetworkManager[56360]: <info>  [1764428582.0734] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Nov 29 10:03:02 np0005539860 systemd[1]: Reloaded Network Manager.
Nov 29 10:03:02 np0005539860 systemd[1]: session-11.scope: Deactivated successfully.
Nov 29 10:03:02 np0005539860 systemd[1]: session-11.scope: Consumed 50.912s CPU time.
Nov 29 10:03:02 np0005539860 systemd-logind[794]: Session 11 logged out. Waiting for processes to exit.
Nov 29 10:03:02 np0005539860 systemd-logind[794]: Removed session 11.
Nov 29 10:03:08 np0005539860 systemd-logind[794]: New session 12 of user zuul.
Nov 29 10:03:08 np0005539860 systemd[1]: Started Session 12 of User zuul.
Nov 29 10:03:09 np0005539860 python3.9[60566]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:03:10 np0005539860 python3.9[60720]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:03:11 np0005539860 python3.9[60910]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:03:12 np0005539860 systemd[1]: session-12.scope: Deactivated successfully.
Nov 29 10:03:12 np0005539860 systemd[1]: session-12.scope: Consumed 2.302s CPU time.
Nov 29 10:03:12 np0005539860 systemd-logind[794]: Session 12 logged out. Waiting for processes to exit.
Nov 29 10:03:12 np0005539860 systemd-logind[794]: Removed session 12.
Nov 29 10:03:12 np0005539860 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Nov 29 10:03:17 np0005539860 systemd-logind[794]: New session 13 of user zuul.
Nov 29 10:03:17 np0005539860 systemd[1]: Started Session 13 of User zuul.
Nov 29 10:03:18 np0005539860 python3.9[61092]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:03:19 np0005539860 python3.9[61246]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:03:21 np0005539860 python3.9[61403]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:03:21 np0005539860 python3.9[61487]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:03:24 np0005539860 python3.9[61641]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:03:25 np0005539860 python3.9[61832]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:03:26 np0005539860 python3.9[61984]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
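
The trailing "#012" is not part of the command: the syslog pipeline escapes control characters as octal, so this is a newline appended to the templated command string. The task being run is simply:

    podman network inspect podman
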
Nov 29 10:03:26 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:03:27 np0005539860 python3.9[62147]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:03:27 np0005539860 python3.9[62225]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:03:28 np0005539860 python3.9[62377]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:03:28 np0005539860 python3.9[62455]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:03:29 np0005539860 python3.9[62607]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:03:30 np0005539860 python3.9[62759]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:03:31 np0005539860 python3.9[62911]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:03:31 np0005539860 python3.9[63063]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
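
Taken together, the four ini_file tasks above leave /etc/containers/containers.conf in roughly this state, reconstructed from the logged section/option/value arguments:

    [containers]
    pids_limit = 4096

    [engine]
    events_logger = "journald"
    runtime = "crun"

    [network]
    network_backend = "netavark"
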
Nov 29 10:03:32 np0005539860 python3.9[63215]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:03:34 np0005539860 python3.9[63368]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:03:35 np0005539860 python3.9[63522]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:03:36 np0005539860 python3.9[63674]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:03:37 np0005539860 python3.9[63826]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:03:38 np0005539860 python3.9[63979]: ansible-service_facts Invoked
Nov 29 10:03:39 np0005539860 network[63996]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 10:03:39 np0005539860 network[63997]: 'network-scripts' will be removed from distribution in near future.
Nov 29 10:03:39 np0005539860 network[63998]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 10:03:44 np0005539860 python3.9[64450]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:03:47 np0005539860 python3.9[64603]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Nov 29 10:03:48 np0005539860 python3.9[64755]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:03:49 np0005539860 python3.9[64880]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428628.4160903-232-221658685292467/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:03:50 np0005539860 python3.9[65034]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:03:51 np0005539860 python3.9[65159]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428630.1227105-247-83315210345737/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:03:52 np0005539860 python3.9[65313]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
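
PEERNTP=no keeps DHCP-supplied NTP servers from being injected into the time configuration, so chronyd uses only the servers in the managed /etc/chrony.conf. A shell equivalent of the lineinfile task, matching its regexp and line:

    grep -q '^PEERNTP=' /etc/sysconfig/network \
        && sed -i 's/^PEERNTP=.*/PEERNTP=no/' /etc/sysconfig/network \
        || echo 'PEERNTP=no' >> /etc/sysconfig/network
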
Nov 29 10:03:53 np0005539860 python3.9[65467]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:03:54 np0005539860 python3.9[65551]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:03:56 np0005539860 python3.9[65705]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:03:56 np0005539860 python3.9[65789]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:03:57 np0005539860 chronyd[802]: chronyd exiting
Nov 29 10:03:57 np0005539860 systemd[1]: Stopping NTP client/server...
Nov 29 10:03:57 np0005539860 systemd[1]: chronyd.service: Deactivated successfully.
Nov 29 10:03:57 np0005539860 systemd[1]: Stopped NTP client/server.
Nov 29 10:03:57 np0005539860 systemd[1]: Starting NTP client/server...
Nov 29 10:03:57 np0005539860 chronyd[65797]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Nov 29 10:03:57 np0005539860 chronyd[65797]: Frequency -26.097 +/- 0.129 ppm read from /var/lib/chrony/drift
Nov 29 10:03:57 np0005539860 chronyd[65797]: Loaded seccomp filter (level 2)
Nov 29 10:03:57 np0005539860 systemd[1]: Started NTP client/server.
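
chronyd restarts cleanly, reloading its stored frequency offset from the drift file. Sync state after a restart like this can be confirmed with the standard client commands:

    chronyc tracking     # offset, stratum, current reference source
    chronyc sources -v   # per-server reachability and jitter
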
Nov 29 10:03:57 np0005539860 systemd[1]: session-13.scope: Deactivated successfully.
Nov 29 10:03:57 np0005539860 systemd[1]: session-13.scope: Consumed 27.251s CPU time.
Nov 29 10:03:57 np0005539860 systemd-logind[794]: Session 13 logged out. Waiting for processes to exit.
Nov 29 10:03:57 np0005539860 systemd-logind[794]: Removed session 13.
Nov 29 10:04:02 np0005539860 systemd-logind[794]: New session 14 of user zuul.
Nov 29 10:04:02 np0005539860 systemd[1]: Started Session 14 of User zuul.
Nov 29 10:04:04 np0005539860 python3.9[65976]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:04:05 np0005539860 python3.9[66132]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:06 np0005539860 python3.9[66307]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:06 np0005539860 python3.9[66385]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.5wznusig recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:07 np0005539860 python3.9[66537]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:08 np0005539860 python3.9[66660]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428647.2776618-61-38493551388530/.source _original_basename=.18ckg66s follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:09 np0005539860 python3.9[66812]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:04:10 np0005539860 python3.9[66964]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:11 np0005539860 python3.9[67087]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428649.8857024-85-194261261978263/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:04:11 np0005539860 python3.9[67239]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:12 np0005539860 python3.9[67362]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428651.3087456-85-133417642642309/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:04:13 np0005539860 python3.9[67514]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:13 np0005539860 python3.9[67666]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:14 np0005539860 python3.9[67789]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428653.4723308-122-140282022626894/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:15 np0005539860 python3.9[67941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:16 np0005539860 python3.9[68064]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428654.9025137-137-271271200628210/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:17 np0005539860 python3.9[68218]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:04:17 np0005539860 systemd[1]: Reloading.
Nov 29 10:04:17 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:04:17 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:04:17 np0005539860 systemd[1]: Reloading.
Nov 29 10:04:18 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:04:18 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:04:18 np0005539860 systemd[1]: Starting EDPM Container Shutdown...
Nov 29 10:04:18 np0005539860 systemd[1]: Finished EDPM Container Shutdown.
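
The preset file installed above is what lets systemd's preset mechanism (and the daemon-reload/enable pair Ansible runs) turn the unit on by default. Preset files are one directive per line; the assumed content, matching the unit name, would be:

    cat /etc/systemd/system-preset/91-edpm-container-shutdown.preset
    enable edpm-container-shutdown.service
    # applied explicitly with:
    systemctl preset edpm-container-shutdown.service
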
Nov 29 10:04:19 np0005539860 python3.9[68445]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:19 np0005539860 python3.9[68568]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428658.5273283-160-197727408824104/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:20 np0005539860 python3.9[68720]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:20 np0005539860 python3.9[68843]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428659.9276466-175-139466046682742/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:21 np0005539860 python3.9[68995]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:04:21 np0005539860 systemd[1]: Reloading.
Nov 29 10:04:21 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:04:21 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:04:22 np0005539860 systemd[1]: Reloading.
Nov 29 10:04:22 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:04:22 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:04:22 np0005539860 systemd[1]: Starting Create netns directory...
Nov 29 10:04:22 np0005539860 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 10:04:22 np0005539860 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 10:04:22 np0005539860 systemd[1]: Finished Create netns directory.
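
The netns-placeholder rollout repeats the pattern: copy a unit file and a preset, reload systemd, and start the unit once so it creates the netns directory (it exits immediately, hence "Deactivated successfully"). Sketched from the logged parameters; the file contents themselves are not logged:

    - name: Install the netns-placeholder unit
      ansible.builtin.copy:
        src: netns-placeholder-service          # per _original_basename in the log
        dest: /etc/systemd/system/netns-placeholder.service
        owner: root
        group: root
        mode: "0644"

    - name: Run it once to create the netns directory
      ansible.builtin.systemd:
        name: netns-placeholder
        state: started
        enabled: true
        daemon_reload: true
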
Nov 29 10:04:23 np0005539860 python3.9[69220]: ansible-ansible.builtin.service_facts Invoked
Nov 29 10:04:23 np0005539860 network[69237]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 10:04:23 np0005539860 network[69238]: 'network-scripts' will be removed from distribution in near future.
Nov 29 10:04:23 np0005539860 network[69239]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 10:04:27 np0005539860 python3.9[69503]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:04:27 np0005539860 systemd[1]: Reloading.
Nov 29 10:04:27 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:04:27 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:04:27 np0005539860 systemd[1]: Stopping IPv4 firewall with iptables...
Nov 29 10:04:28 np0005539860 iptables.init[69544]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Nov 29 10:04:28 np0005539860 iptables.init[69544]: iptables: Flushing firewall rules: [  OK  ]
Nov 29 10:04:28 np0005539860 systemd[1]: iptables.service: Deactivated successfully.
Nov 29 10:04:28 np0005539860 systemd[1]: Stopped IPv4 firewall with iptables.
Nov 29 10:04:29 np0005539860 python3.9[69742]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:04:30 np0005539860 python3.9[69896]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:04:30 np0005539860 systemd[1]: Reloading.
Nov 29 10:04:30 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:04:30 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:04:30 np0005539860 systemd[1]: Starting Netfilter Tables...
Nov 29 10:04:30 np0005539860 systemd[1]: Finished Netfilter Tables.
Nov 29 10:04:31 np0005539860 python3.9[70088]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
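
This is the iptables-to-nftables cutover: both legacy firewall services are stopped and disabled, nftables is enabled in their place, and the ruleset is flushed so configuration starts from a clean slate. Condensed into tasks (the loop is my own; the module parameters match the log):

    - name: Stop and disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        state: stopped
        enabled: false
      loop:
        - iptables.service
        - ip6tables.service

    - name: Enable and start nftables
      ansible.builtin.systemd:
        name: nftables
        state: started
        enabled: true

    - name: Begin from an empty ruleset
      ansible.builtin.command: nft flush ruleset
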
Nov 29 10:04:32 np0005539860 python3.9[70241]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:34 np0005539860 python3.9[70366]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428671.92628-244-215605851442374/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:35 np0005539860 python3.9[70519]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:04:35 np0005539860 systemd[1]: Reloading OpenSSH server daemon...
Nov 29 10:04:35 np0005539860 systemd[1]: Reloaded OpenSSH server daemon.
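
Note the validate= argument on the sshd_config copy: Ansible only moves the rendered file into place if /usr/sbin/sshd -T can parse it, and the daemon is then reloaded rather than restarted, so existing sessions survive. Equivalent tasks:

    - name: Install sshd_config, refusing a file sshd cannot parse
      ansible.builtin.copy:
        src: sshd_config_block.j2        # rendered template, per _original_basename
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s

    - name: Reload rather than restart, keeping sessions alive
      ansible.builtin.systemd:
        name: sshd
        state: reloaded
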
Nov 29 10:04:36 np0005539860 python3.9[70675]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:36 np0005539860 python3.9[70827]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:37 np0005539860 python3.9[70950]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428676.3743138-275-167369930529654/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
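
sshd-networks.yaml is input for the edpm_nftables generator that runs at 10:04:46; the log records only its checksum. Judging by the snippet format visible in the decoded ovn.yaml content at 10:06:55 below, it would look roughly like this (values hypothetical):

    # Hypothetical sshd rule in the edpm_nftables snippet format
    - rule_name: 003 accept ssh from the ctlplane networks
      rule:
        proto: tcp
        dport: 22
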
Nov 29 10:04:38 np0005539860 python3.9[71102]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Nov 29 10:04:38 np0005539860 systemd[1]: Starting Time & Date Service...
Nov 29 10:04:38 np0005539860 systemd[1]: Started Time & Date Service.
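
community.general.timezone delegates to timedatectl on systemd hosts, which is why systemd-timedated starts here (it idles out again at 10:05:08). The whole step is:

    - name: Keep the host clock on UTC
      community.general.timezone:
        name: UTC
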
Nov 29 10:04:39 np0005539860 python3.9[71258]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:40 np0005539860 python3.9[71410]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:40 np0005539860 python3.9[71533]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428679.926337-310-33007362485951/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:41 np0005539860 python3.9[71685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:42 np0005539860 python3.9[71808]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428681.1846087-325-12594334730497/.source.yaml _original_basename=.gg_ib_wr follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:43 np0005539860 python3.9[71960]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:43 np0005539860 python3.9[72083]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428682.551607-340-46860269370965/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:44 np0005539860 python3.9[72235]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:04:45 np0005539860 python3.9[72388]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
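
/etc/nftables/iptables.nft supplies iptables-compatible tables and chains; it is loaded directly, and the resulting ruleset is then dumped as JSON, presumably so the generator can reconcile against the live state. Sketch (the register name is mine):

    - name: Load the iptables-compat table definitions
      ansible.builtin.command: nft -f /etc/nftables/iptables.nft

    - name: Capture the live ruleset as JSON
      ansible.builtin.command: nft -j list ruleset
      register: ruleset_json
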
Nov 29 10:04:46 np0005539860 python3[72541]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 10:04:47 np0005539860 python3.9[72693]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:47 np0005539860 python3.9[72816]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428686.630905-379-45574772532133/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:48 np0005539860 python3.9[72968]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:49 np0005539860 python3.9[73091]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428688.0877297-394-212489441419741/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:50 np0005539860 python3.9[73243]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:50 np0005539860 python3.9[73366]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428689.587806-409-123507295381522/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:51 np0005539860 python3.9[73518]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:52 np0005539860 python3.9[73641]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428690.9400795-424-124096787627370/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:52 np0005539860 python3.9[73793]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:04:53 np0005539860 python3.9[73916]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428692.3136654-439-95460483940540/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:54 np0005539860 python3.9[74068]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:55 np0005539860 python3.9[74220]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
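
Before anything touches the live firewall, all five generated fragments are concatenated and run through nft in check-only mode (-c), so a syntax error cannot take networking down. Reconstructed from the logged command:

    - name: Dry-run the complete generated ruleset
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-chains.nft \
            /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft \
            /etc/nftables/edpm-jumps.nft | nft -c -f -
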
Nov 29 10:04:56 np0005539860 python3.9[74379]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
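
Decoding the #012 newline escapes in block=, the task leaves /etc/sysconfig/nftables.conf with the managed block below, which is what makes the ruleset persist across reboots via nftables.service; the file is validated with nft -c -f %s before being written:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK
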
Nov 29 10:04:56 np0005539860 python3.9[74532]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:57 np0005539860 python3.9[74684]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:04:58 np0005539860 python3.9[74836]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Nov 29 10:04:58 np0005539860 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 10:04:59 np0005539860 python3.9[74990]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
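
Two hugetlbfs pools are mounted, one per page size; state=mounted both mounts them immediately and persists them in /etc/fstab. Condensed (the loop is mine; the parameters match the log):

    - name: Mount and persist the hugepage pools
      ansible.posix.mount:
        path: "{{ item.path }}"
        src: none
        fstype: hugetlbfs
        opts: "pagesize={{ item.size }}"
        state: mounted
      loop:
        - { path: /dev/hugepages1G, size: 1G }
        - { path: /dev/hugepages2M, size: 2M }
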
Nov 29 10:04:59 np0005539860 systemd[1]: session-14.scope: Deactivated successfully.
Nov 29 10:04:59 np0005539860 systemd[1]: session-14.scope: Consumed 39.810s CPU time.
Nov 29 10:04:59 np0005539860 systemd-logind[794]: Session 14 logged out. Waiting for processes to exit.
Nov 29 10:04:59 np0005539860 systemd-logind[794]: Removed session 14.
Nov 29 10:05:05 np0005539860 systemd-logind[794]: New session 15 of user zuul.
Nov 29 10:05:05 np0005539860 systemd[1]: Started Session 15 of User zuul.
Nov 29 10:05:06 np0005539860 python3.9[75171]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Nov 29 10:05:07 np0005539860 python3.9[75323]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:05:08 np0005539860 python3.9[75475]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:05:08 np0005539860 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 10:05:09 np0005539860 python3.9[75629]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCsEaepTE6ZnTVzU9vgUrANOf8dQWYhhEv6SNjfiiMIrDzMS/tcgDRZg3OvZ4uTikwb6v2IOhE1pL5vbJI5HxCsO86WOgAte/2X5AC+ohyfrxx4OkTGNlxDvM6Nfp90xPhMSdpU3f1RBQrp0FEYeHBwNAPbQL0DluaF8pWLav7vj0i0p1pyxHEvS4AoIrHJoLXZhWCiLVj90xM8hQPHeD12qFHCWUSZJEdO3/hLqVgHCezTCi6/UYcrHHJ+wEplpLAaZimtrwcmGs8/IBSPMVVfZEmbNckKr1gkLQu3sORu8d55vU8GqDa0A+iGwC+zrYKb2He+JW9HN7OOI86Two3xy3mf8uRrKYRa27nYPlk55rHJQyXn7dAATOmSQAW8+vLfmylBW3FZM5W02JbDC8H1lMZtd2gLHc2zmTMjC9qIIkjbRNpLIpxhItkjZY1tuPpibkN3ni9ASrM2s/fQXsFEzOBTgw34QlJOHrk8KZ3meq2cQU0Oq1eCc/4C0QF1l/M=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKILtF1pIqBGn+593ka/4UxwAf2ULA0oodZlyGx73gd#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJhHw2QW6M+R8yED9uxy5nGjTMIt8a9QQRtWjRB8VjIUuoLaf9XKLcdnvpALG22hu+uG3g9FWqgXg1sguMIxS9Y=#012 create=True mode=0644 path=/tmp/ansible.lhvksznq state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:05:10 np0005539860 python3.9[75781]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.lhvksznq' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:05:11 np0005539860 python3.9[75935]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.lhvksznq state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
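
Session 15 rebuilds /etc/ssh/ssh_known_hosts: the host keys (one ssh-rsa, one ssh-ed25519 and one ecdsa-sha2-nistp256 entry for compute-0, per the #012-separated block above) are staged in a tempfile with blockinfile, copied over the real file in a single shell redirect, and the tempfile is removed. A sketch of that flow (the register name and variable are mine):

    - name: Stage the known_hosts content in a temp file
      ansible.builtin.tempfile:
        state: file
        prefix: ansible.
      register: kh_tmp

    - name: Assemble the host-key block
      ansible.builtin.blockinfile:
        path: "{{ kh_tmp.path }}"
        create: true
        mode: "0644"
        block: "{{ known_host_entries }}"   # the three key lines from the log

    - name: Replace the system file in one shot
      ansible.builtin.shell: cat '{{ kh_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Clean up
      ansible.builtin.file:
        path: "{{ kh_tmp.path }}"
        state: absent
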
Nov 29 10:05:11 np0005539860 systemd[1]: session-15.scope: Deactivated successfully.
Nov 29 10:05:11 np0005539860 systemd[1]: session-15.scope: Consumed 3.833s CPU time.
Nov 29 10:05:11 np0005539860 systemd-logind[794]: Session 15 logged out. Waiting for processes to exit.
Nov 29 10:05:11 np0005539860 systemd-logind[794]: Removed session 15.
Nov 29 10:05:18 np0005539860 systemd-logind[794]: New session 16 of user zuul.
Nov 29 10:05:18 np0005539860 systemd[1]: Started Session 16 of User zuul.
Nov 29 10:05:19 np0005539860 python3.9[76114]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:05:20 np0005539860 python3.9[76270]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 10:05:21 np0005539860 python3.9[76424]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:05:22 np0005539860 python3.9[76577]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:05:23 np0005539860 python3.9[76730]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:05:24 np0005539860 python3.9[76884]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:05:25 np0005539860 python3.9[77039]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
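
Session 16 re-asserts the chain skeleton unconditionally but reloads the EDPM-managed rules only if the .changed marker touched at 10:04:54 still exists, removing it afterwards; that keeps repeated runs idempotent. Likely control flow (the register and when clause are my inference):

    - name: Ensure the EDPM chains exist
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Did an earlier run change the rules?
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: rules_changed

    - name: Flush and reload only the EDPM-managed parts
      ansible.builtin.shell: |
        set -o pipefail
        cat /etc/nftables/edpm-flushes.nft \
            /etc/nftables/edpm-rules.nft \
            /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: rules_changed.stat.exists

    - name: Consume the marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent
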
Nov 29 10:05:25 np0005539860 systemd[1]: session-16.scope: Deactivated successfully.
Nov 29 10:05:25 np0005539860 systemd[1]: session-16.scope: Consumed 5.235s CPU time.
Nov 29 10:05:25 np0005539860 systemd-logind[794]: Session 16 logged out. Waiting for processes to exit.
Nov 29 10:05:25 np0005539860 systemd-logind[794]: Removed session 16.
Nov 29 10:05:32 np0005539860 systemd-logind[794]: New session 17 of user zuul.
Nov 29 10:05:32 np0005539860 systemd[1]: Started Session 17 of User zuul.
Nov 29 10:05:33 np0005539860 python3.9[77217]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:05:34 np0005539860 python3.9[77373]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:05:35 np0005539860 python3.9[77457]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Nov 29 10:05:37 np0005539860 python3.9[77608]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:05:38 np0005539860 python3.9[77759]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
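
Session 17 is a reboot check: yum-utils provides needs-restarting, whose -r flag exits 1 when the running kernel or core libraries are stale, and the find looks for explicit flag files. Sketch (register name and error handling are mine):

    - name: Make needs-restarting available
      ansible.builtin.dnf:
        name: yum-utils

    - name: Exit status 1 here means a reboot is required
      ansible.builtin.command: needs-restarting -r
      register: reboot_hint
      failed_when: false

    - name: Look for explicit reboot-required flag files
      ansible.builtin.find:
        paths: /var/lib/openstack/reboot_required/
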
Nov 29 10:05:39 np0005539860 python3.9[77909]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:05:40 np0005539860 python3.9[78059]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:05:40 np0005539860 systemd[1]: session-17.scope: Deactivated successfully.
Nov 29 10:05:40 np0005539860 systemd[1]: session-17.scope: Consumed 6.111s CPU time.
Nov 29 10:05:40 np0005539860 systemd-logind[794]: Session 17 logged out. Waiting for processes to exit.
Nov 29 10:05:40 np0005539860 systemd-logind[794]: Removed session 17.
Nov 29 10:05:46 np0005539860 systemd-logind[794]: New session 18 of user zuul.
Nov 29 10:05:46 np0005539860 systemd[1]: Started Session 18 of User zuul.
Nov 29 10:05:47 np0005539860 python3.9[78237]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:05:49 np0005539860 python3.9[78393]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:05:50 np0005539860 python3.9[78545]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:05:51 np0005539860 python3.9[78697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:05:52 np0005539860 python3.9[78820]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428750.5241165-65-66131645182815/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=0cb64fd00c59f68e61bd1f5470f064fb655139cc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:05:52 np0005539860 python3.9[78972]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:05:53 np0005539860 python3.9[79095]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428752.2507236-65-70319769885629/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=67fc1207e99e3928f4709be78ec7d7a66ea0165d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:05:54 np0005539860 python3.9[79247]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:05:54 np0005539860 python3.9[79370]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428753.5663342-65-95393934167182/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=6d51400572a7d723aece20d58c918f8291512d78 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:05:55 np0005539860 python3.9[79522]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:05:55 np0005539860 python3.9[79674]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:05:56 np0005539860 python3.9[79826]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:05:57 np0005539860 python3.9[79949]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428756.2163804-124-91945471226774/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=73b9490dd60ceb6ba7efab322d8d059f8d2ea137 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:05:58 np0005539860 python3.9[80101]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:05:58 np0005539860 python3.9[80224]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428757.538583-124-123281549927637/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=67fc1207e99e3928f4709be78ec7d7a66ea0165d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:05:59 np0005539860 python3.9[80376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:00 np0005539860 python3.9[80499]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428759.0346231-124-225694421466193/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=a6e56b234886a508321990ce354d9e9547e5d25e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:01 np0005539860 python3.9[80651]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:01 np0005539860 python3.9[80803]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:02 np0005539860 python3.9[80955]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:03 np0005539860 python3.9[81078]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428762.0041752-183-64018330844643/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=27dcd4770b968caef8f014e4ecf360ec1863c1ad backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:03 np0005539860 python3.9[81230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:04 np0005539860 python3.9[81353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428763.3769774-183-31535191167517/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=aa7ab4c9c6d1770c30b6697ef92733b9ab5cb382 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:05 np0005539860 python3.9[81505]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:05 np0005539860 python3.9[81628]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428764.7962954-183-92730708878519/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=8439a1b0a78320655fa5f81584ed600d540cf354 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:06 np0005539860 chronyd[65797]: Selected source 174.138.193.90 (pool.ntp.org)
Nov 29 10:06:06 np0005539860 python3.9[81780]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:07 np0005539860 python3.9[81932]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:08 np0005539860 python3.9[82084]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:08 np0005539860 python3.9[82207]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428767.5912569-242-130403983365848/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b968f13208d7764160918d6f1caef52fe4113627 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:09 np0005539860 python3.9[82359]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:09 np0005539860 python3.9[82482]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428768.8746696-242-231521195899990/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=02ec2d7a734f8bc47f5c3f195401405d2f394294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:10 np0005539860 python3.9[82634]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:11 np0005539860 python3.9[82757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428770.113066-242-276770629396492/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4db1e991dd590e5dc458f183602ce94bc92029e0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:12 np0005539860 python3.9[82909]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:12 np0005539860 python3.9[83061]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:13 np0005539860 python3.9[83213]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:14 np0005539860 python3.9[83336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428773.2024314-301-268189461232689/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b7cae8cf0323072d50064eeaf243d300ed5514c4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:15 np0005539860 python3.9[83488]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:15 np0005539860 python3.9[83611]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428774.6278791-301-50528873675053/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=aa7ab4c9c6d1770c30b6697ef92733b9ab5cb382 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:16 np0005539860 python3.9[83763]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:17 np0005539860 python3.9[83886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428775.994293-301-230083145116064/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=2b5661285db4c937e0e376b4dab4f72b7f42783a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
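
From 10:05:49 the same three-file pattern repeats for each service (telemetry-power-monitoring, telemetry, ovn, libvirt, neutron-metadata): a directory labelled container_file_t so containers can read it, then tls.crt, ca.crt and tls.key at mode 0600. The checksums show each service gets its own certificate and key while several share a CA. Condensed into one parameterised block ({{ service }} is my own variable):

    - name: Install per-service TLS material
      vars:
        cert_dir: /var/lib/openstack/certs/{{ service }}/default
      block:
        - name: Directory readable by containers under SELinux
          ansible.builtin.file:
            path: "{{ cert_dir }}"
            state: directory
            owner: root
            group: root
            mode: "0755"
            setype: container_file_t

        - name: Certificate, CA and key, root-only
          ansible.builtin.copy:
            src: "compute-0.ctlplane.example.com-{{ item }}"
            dest: "{{ cert_dir }}/{{ item }}"
            mode: "0600"
          loop: [tls.crt, ca.crt, tls.key]
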
Nov 29 10:06:18 np0005539860 python3.9[84038]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:19 np0005539860 python3.9[84190]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:20 np0005539860 python3.9[84313]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428778.9104033-369-266664473849962/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:21 np0005539860 python3.9[84465]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:21 np0005539860 python3.9[84617]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:22 np0005539860 python3.9[84740]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428781.2445643-393-244978475769651/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:23 np0005539860 python3.9[84892]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:23 np0005539860 python3.9[85044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:24 np0005539860 python3.9[85167]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428783.480221-417-8283047903265/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:25 np0005539860 python3.9[85319]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:26 np0005539860 python3.9[85471]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:26 np0005539860 python3.9[85594]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428785.6668868-441-178114941145525/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:27 np0005539860 python3.9[85746]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:28 np0005539860 python3.9[85898]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:29 np0005539860 python3.9[86021]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428787.9129965-465-110813960595878/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:29 np0005539860 python3.9[86173]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:30 np0005539860 python3.9[86325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:31 np0005539860 python3.9[86448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428789.9702508-489-131813710020311/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:31 np0005539860 python3.9[86600]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:32 np0005539860 python3.9[86752]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:33 np0005539860 python3.9[86875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428792.1843235-513-55691108540906/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:34 np0005539860 python3.9[87027]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:35 np0005539860 python3.9[87179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:35 np0005539860 python3.9[87302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428794.48749-537-106588665208190/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=6b29adeeedb2443a351481a01378704e187007d2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
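
The cacerts fan-out copies one and the same bundle (checksum 6b29adee... in every copy above) into a directory per consumer. Equivalent loop:

    - name: Distribute the shared CA bundle
      ansible.builtin.copy:
        src: tls-ca-bundle.pem
        dest: /var/lib/openstack/cacerts/{{ item }}/tls-ca-bundle.pem
        owner: root
        group: root
        mode: "0644"
      loop:
        - nova
        - repo-setup
        - libvirt
        - ovn
        - telemetry
        - neutron-metadata
        - bootstrap
        - telemetry-power-monitoring
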
Nov 29 10:06:36 np0005539860 systemd[1]: session-18.scope: Deactivated successfully.
Nov 29 10:06:36 np0005539860 systemd[1]: session-18.scope: Consumed 38.203s CPU time.
Nov 29 10:06:36 np0005539860 systemd-logind[794]: Session 18 logged out. Waiting for processes to exit.
Nov 29 10:06:36 np0005539860 systemd-logind[794]: Removed session 18.
Nov 29 10:06:42 np0005539860 systemd-logind[794]: New session 19 of user zuul.
Nov 29 10:06:42 np0005539860 systemd[1]: Started Session 19 of User zuul.
Nov 29 10:06:43 np0005539860 python3.9[87480]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:06:44 np0005539860 python3.9[87636]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:45 np0005539860 python3.9[87788]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:06:46 np0005539860 python3.9[87938]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:06:48 np0005539860 python3.9[88090]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Nov 29 10:06:50 np0005539860 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=11 res=1
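
The seboolean call above is what drives the SELinux policy reload recorded by dbus-broker-launch. As logged, the task amounts to:

- name: Allow sandboxed containers to use netlink
  ansible.posix.seboolean:
    name: virt_sandbox_use_netlink
    state: true
    persistent: true  # written to the policy store, hence the load_policy event
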
Nov 29 10:06:50 np0005539860 python3.9[88246]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:06:51 np0005539860 python3.9[88330]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:06:53 np0005539860 python3.9[88483]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
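
The dnf and systemd invocations map onto the usual install-then-enable pair; a sketch with only the parameters that differ from module defaults:

- name: Install Open vSwitch
  ansible.builtin.dnf:
    name: openvswitch
    state: present

- name: Enable and start Open vSwitch
  ansible.builtin.systemd:
    name: openvswitch.service
    enabled: true
    state: started
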
Nov 29 10:06:55 np0005539860 python3[88638]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
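
Decoding the rsyslog #012 newline escapes, the snippet written to /var/lib/edpm-config/firewall/ovn.yaml reads:

- rule_name: 118 neutron vxlan networks
  rule:
    proto: udp
    dport: 4789
- rule_name: 119 neutron geneve networks
  rule:
    proto: udp
    dport: 6081
    state: ["UNTRACKED"]
- rule_name: 120 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: OUTPUT
    jump: NOTRACK
    action: append
    state: []
- rule_name: 121 neutron geneve networks no conntrack
  rule:
    proto: udp
    dport: 6081
    table: raw
    chain: PREROUTING
    jump: NOTRACK
    action: append
    state: []
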
Nov 29 10:06:56 np0005539860 python3.9[88790]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:57 np0005539860 python3.9[88942]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:58 np0005539860 python3.9[89020]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:06:59 np0005539860 python3.9[89172]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:06:59 np0005539860 python3.9[89250]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.w4zkmoqu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:00 np0005539860 python3.9[89402]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:01 np0005539860 python3.9[89480]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:02 np0005539860 python3.9[89632]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:03 np0005539860 python3[89785]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 10:07:04 np0005539860 python3.9[89937]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:05 np0005539860 python3.9[90062]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428823.558405-157-127959050309173/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:05 np0005539860 python3.9[90214]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:06 np0005539860 python3.9[90339]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428825.327538-172-271461474733266/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:07 np0005539860 python3.9[90491]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:08 np0005539860 python3.9[90616]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428826.884799-187-190240861369486/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:09 np0005539860 python3.9[90768]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:09 np0005539860 python3.9[90893]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428828.6221275-202-206315543774172/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:10 np0005539860 python3.9[91045]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:11 np0005539860 python3.9[91170]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764428830.05689-217-66029899130259/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:12 np0005539860 python3.9[91322]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:13 np0005539860 python3.9[91474]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:14 np0005539860 python3.9[91629]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:14 np0005539860 python3.9[91781]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:15 np0005539860 python3.9[91934]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:07:16 np0005539860 python3.9[92088]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:17 np0005539860 python3.9[92243]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
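
Taken together, the preceding lines describe the nftables apply sequence: validate the concatenated ruleset with nft -c, wire the include lines into /etc/sysconfig/nftables.conf, load the chain definitions, then flush and reapply the rules only while the edpm-rules.nft.changed marker exists (it is created when the rules file changes and removed after a successful apply). A sketch of the core steps, mirroring the logged commands:

- name: Check the assembled ruleset without applying it
  ansible.builtin.shell: |
    set -o pipefail
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -

- name: Load the chain definitions
  ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

- name: Flush and reapply the rule set
  ansible.builtin.shell: |
    set -o pipefail
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
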
Nov 29 10:07:18 np0005539860 python3.9[92393]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:07:19 np0005539860 python3.9[92546]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:c6:22:5a:f7" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:19 np0005539860 ovs-vsctl[92547]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:c6:22:5a:f7 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
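
These external_ids are the runtime configuration ovn-controller reads from the local OVSDB: geneve encapsulation on 172.19.0.100 and the southbound database at ssl:ovsdbserver-sb.openstack.svc:6642. Any value can be read back for verification, e.g. (a sketch):

- name: Read back the configured southbound endpoint
  ansible.builtin.command: ovs-vsctl get Open_vSwitch . external_ids:ovn-remote
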
Nov 29 10:07:20 np0005539860 python3.9[92699]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:21 np0005539860 python3.9[92854]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:6640:127.0.0.1\" -- add Open_vSwitch . manager_options @manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:21 np0005539860 ovs-vsctl[92855]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
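
The create only runs because the preceding ovs-vsctl show | grep -q "Manager" check found no manager; once created, the ptcp:6640 listener can be confirmed with (a sketch):

- name: List the configured OVSDB managers
  ansible.builtin.command: ovs-vsctl get Open_vSwitch . manager_options
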
Nov 29 10:07:21 np0005539860 python3.9[93005]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:07:22 np0005539860 python3.9[93159]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:07:23 np0005539860 python3.9[93311]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:23 np0005539860 python3.9[93389]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:07:24 np0005539860 python3.9[93541]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:25 np0005539860 python3.9[93619]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:07:25 np0005539860 python3.9[93771]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:26 np0005539860 python3.9[93923]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:26 np0005539860 python3.9[94001]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:27 np0005539860 python3.9[94153]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:28 np0005539860 python3.9[94231]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:28 np0005539860 python3.9[94383]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:07:28 np0005539860 systemd[1]: Reloading.
Nov 29 10:07:29 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:07:29 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
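
The preset file paired with the edpm-container-shutdown unit is what makes systemd preset logic enable it; its content is not shown in the log, but a preset of this kind typically holds a single enable line (assumed below):

- name: Install the container-shutdown preset (content assumed, not shown in the log)
  ansible.builtin.copy:
    dest: /etc/systemd/system-preset/91-edpm-container-shutdown.preset
    owner: root
    group: root
    mode: "0644"
    content: |
      enable edpm-container-shutdown.service
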
Nov 29 10:07:29 np0005539860 python3.9[94571]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:30 np0005539860 python3.9[94649]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:31 np0005539860 python3.9[94801]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:31 np0005539860 python3.9[94879]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:32 np0005539860 python3.9[95031]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:07:32 np0005539860 systemd[1]: Reloading.
Nov 29 10:07:32 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:07:32 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:07:33 np0005539860 systemd[1]: Starting Create netns directory...
Nov 29 10:07:33 np0005539860 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 10:07:33 np0005539860 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 10:07:33 np0005539860 systemd[1]: Finished Create netns directory.
Nov 29 10:07:34 np0005539860 python3.9[95228]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:07:35 np0005539860 python3.9[95381]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:36 np0005539860 python3.9[95504]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428854.8988082-468-71308103505480/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:07:37 np0005539860 python3.9[95656]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:07:38 np0005539860 python3.9[95808]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:07:39 np0005539860 python3.9[95931]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428857.7150679-493-67746181353373/.source.json _original_basename=.sztxf1x7 follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
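
This JSON is the kolla configuration that kolla_set_configs validates at container start (see the ovn_controller startup lines below). The copy hides the content (content=NOT_LOGGING_PARAMETER); a minimal plausible shape, with the command taken verbatim from the /run_command echo later in the log, would be:

- name: Write the kolla config for ovn_controller (shape assumed)
  ansible.builtin.copy:
    dest: /var/lib/kolla/config_files/ovn_controller.json
    mode: "0600"
    content: |
      {
        "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt"
      }
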
Nov 29 10:07:39 np0005539860 python3.9[96083]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:42 np0005539860 python3.9[96510]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Nov 29 10:07:43 np0005539860 python3.9[96662]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:07:44 np0005539860 python3.9[96814]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 10:07:44 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:07:46 np0005539860 python3[96977]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:07:46 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:07:46 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:07:46 np0005539860 podman[97010]: 2025-11-29 15:07:46.620976652 +0000 UTC m=+0.020202369 image pull 52cb1910f3f090372807028d1c2aea98d2557b1086636469529f290368ecdf69 quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 10:07:46 np0005539860 podman[97010]: 2025-11-29 15:07:46.910847931 +0000 UTC m=+0.310073648 container create c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 10:07:46 np0005539860 python3[96977]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Nov 29 10:07:47 np0005539860 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Nov 29 10:07:47 np0005539860 python3.9[97201]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:07:48 np0005539860 python3.9[97355]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:49 np0005539860 python3.9[97431]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:07:50 np0005539860 python3.9[97582]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764428869.4396074-581-154895613175707/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:07:50 np0005539860 python3.9[97658]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:07:50 np0005539860 systemd[1]: Reloading.
Nov 29 10:07:50 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:07:50 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:07:51 np0005539860 python3.9[97770]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:07:52 np0005539860 systemd[1]: Reloading.
Nov 29 10:07:52 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:07:52 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:07:53 np0005539860 systemd[1]: Starting ovn_controller container...
Nov 29 10:07:53 np0005539860 systemd[1]: Created slice Virtual Machine and Container Slice.
Nov 29 10:07:53 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:07:53 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/224b3edb4d5fb10cb0bf1fcc8a0b918d7f27b3889beeb3ff9abeb936fd8a1de1/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 10:07:53 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.
Nov 29 10:07:53 np0005539860 podman[97811]: 2025-11-29 15:07:53.293835544 +0000 UTC m=+0.148755689 container init c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + sudo -E kolla_set_configs
Nov 29 10:07:53 np0005539860 podman[97811]: 2025-11-29 15:07:53.322357547 +0000 UTC m=+0.177277692 container start c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:07:53 np0005539860 edpm-start-podman-container[97811]: ovn_controller
Nov 29 10:07:53 np0005539860 systemd[1]: Created slice User Slice of UID 0.
Nov 29 10:07:53 np0005539860 systemd[1]: Starting User Runtime Directory /run/user/0...
Nov 29 10:07:53 np0005539860 systemd[1]: Finished User Runtime Directory /run/user/0.
Nov 29 10:07:53 np0005539860 systemd[1]: Starting User Manager for UID 0...
Nov 29 10:07:53 np0005539860 edpm-start-podman-container[97810]: Creating additional drop-in dependency for "ovn_controller" (c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b)
Nov 29 10:07:53 np0005539860 podman[97834]: 2025-11-29 15:07:53.412000242 +0000 UTC m=+0.072844713 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 10:07:53 np0005539860 systemd[1]: c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b-693216ac8ddc7983.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:07:53 np0005539860 systemd[1]: c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b-693216ac8ddc7983.service: Failed with result 'exit-code'.
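
The transient unit failing here is the first podman healthcheck probe firing while ovn-controller is still starting (health_status=starting, health_failing_streak=1 above); it recovers once the process is up. The live state can be checked with (a sketch):

- name: Query the container health state
  ansible.builtin.command:
    argv:
      - podman
      - inspect
      - ovn_controller
      - --format
      # !unsafe keeps Ansible from treating the Go template braces as Jinja2
      - !unsafe '{{.State.Health.Status}}'
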
Nov 29 10:07:53 np0005539860 systemd[1]: Reloading.
Nov 29 10:07:53 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:07:53 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:07:53 np0005539860 systemd[97866]: Queued start job for default target Main User Target.
Nov 29 10:07:53 np0005539860 systemd[97866]: Created slice User Application Slice.
Nov 29 10:07:53 np0005539860 systemd[97866]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Nov 29 10:07:53 np0005539860 systemd[97866]: Started Daily Cleanup of User's Temporary Directories.
Nov 29 10:07:53 np0005539860 systemd[97866]: Reached target Paths.
Nov 29 10:07:53 np0005539860 systemd[97866]: Reached target Timers.
Nov 29 10:07:53 np0005539860 systemd[97866]: Starting D-Bus User Message Bus Socket...
Nov 29 10:07:53 np0005539860 systemd[97866]: Starting Create User's Volatile Files and Directories...
Nov 29 10:07:53 np0005539860 systemd[97866]: Finished Create User's Volatile Files and Directories.
Nov 29 10:07:53 np0005539860 systemd[97866]: Listening on D-Bus User Message Bus Socket.
Nov 29 10:07:53 np0005539860 systemd[97866]: Reached target Sockets.
Nov 29 10:07:53 np0005539860 systemd[97866]: Reached target Basic System.
Nov 29 10:07:53 np0005539860 systemd[97866]: Reached target Main User Target.
Nov 29 10:07:53 np0005539860 systemd[97866]: Startup finished in 135ms.
Nov 29 10:07:53 np0005539860 systemd[1]: Started User Manager for UID 0.
Nov 29 10:07:53 np0005539860 systemd[1]: Started ovn_controller container.
Nov 29 10:07:53 np0005539860 systemd[1]: Started Session c1 of User root.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: INFO:__main__:Validating config file
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: INFO:__main__:Writing out command to execute
Nov 29 10:07:53 np0005539860 systemd[1]: session-c1.scope: Deactivated successfully.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: ++ cat /run_command
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + ARGS=
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + sudo kolla_copy_cacerts
Nov 29 10:07:53 np0005539860 systemd[1]: Started Session c2 of User root.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + [[ ! -n '' ]]
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + . kolla_extend_start
Nov 29 10:07:53 np0005539860 systemd[1]: session-c2.scope: Deactivated successfully.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + umask 0022
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00003|main|INFO|OVN internal version is : [24.03.7-20.33.0-76.8]
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Nov 29 10:07:53 np0005539860 NetworkManager[56360]: <info>  [1764428873.9180] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Nov 29 10:07:53 np0005539860 NetworkManager[56360]: <info>  [1764428873.9191] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 10:07:53 np0005539860 NetworkManager[56360]: <info>  [1764428873.9208] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Nov 29 10:07:53 np0005539860 NetworkManager[56360]: <info>  [1764428873.9215] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Nov 29 10:07:53 np0005539860 NetworkManager[56360]: <info>  [1764428873.9221] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
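
With the SSL session to the southbound database established, the chassis connection can be confirmed from inside the container (a sketch):

- name: Check ovn-controller's southbound connection
  ansible.builtin.command: podman exec ovn_controller ovn-appctl -t ovn-controller connection-status
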
Nov 29 10:07:53 np0005539860 kernel: br-int: entered promiscuous mode
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00014|main|INFO|OVS feature set changed, force recompute.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00018|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00019|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00020|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00021|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00022|main|INFO|OVS feature set changed, force recompute.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 10:07:53 np0005539860 ovn_controller[97827]: 2025-11-29T15:07:53Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Nov 29 10:07:53 np0005539860 NetworkManager[56360]: <info>  [1764428873.9441] manager: (ovn-00c02a-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Nov 29 10:07:53 np0005539860 systemd-udevd[97983]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 10:07:53 np0005539860 kernel: genev_sys_6081: entered promiscuous mode
Nov 29 10:07:53 np0005539860 systemd-udevd[97984]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 10:07:53 np0005539860 NetworkManager[56360]: <info>  [1764428873.9725] device (genev_sys_6081): carrier: link connected
Nov 29 10:07:53 np0005539860 NetworkManager[56360]: <info>  [1764428873.9728] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Nov 29 10:07:54 np0005539860 python3.9[98092]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:54 np0005539860 ovs-vsctl[98093]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Nov 29 10:07:55 np0005539860 python3.9[98245]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:55 np0005539860 ovs-vsctl[98247]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Nov 29 10:07:56 np0005539860 python3.9[98400]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:07:56 np0005539860 ovs-vsctl[98401]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
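
The ERR two lines up is expected: the get fails because ovn-cms-options was never set, and the unconditional remove that follows keeps the play idempotent anyway. A quieter probe uses --if-exists (a sketch):

- name: Read ovn-cms-options without erroring when the key is unset
  ansible.builtin.command: ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options
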
Nov 29 10:07:56 np0005539860 systemd[1]: session-19.scope: Deactivated successfully.
Nov 29 10:07:56 np0005539860 systemd[1]: session-19.scope: Consumed 51.515s CPU time.
Nov 29 10:07:56 np0005539860 systemd-logind[794]: Session 19 logged out. Waiting for processes to exit.
Nov 29 10:07:56 np0005539860 systemd-logind[794]: Removed session 19.
Nov 29 10:08:02 np0005539860 systemd-logind[794]: New session 21 of user zuul.
Nov 29 10:08:02 np0005539860 systemd[1]: Started Session 21 of User zuul.
Nov 29 10:08:03 np0005539860 python3.9[98579]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:08:03 np0005539860 systemd[1]: Stopping User Manager for UID 0...
Nov 29 10:08:04 np0005539860 systemd[97866]: Activating special unit Exit the Session...
Nov 29 10:08:04 np0005539860 systemd[97866]: Stopped target Main User Target.
Nov 29 10:08:04 np0005539860 systemd[97866]: Stopped target Basic System.
Nov 29 10:08:04 np0005539860 systemd[97866]: Stopped target Paths.
Nov 29 10:08:04 np0005539860 systemd[97866]: Stopped target Sockets.
Nov 29 10:08:04 np0005539860 systemd[97866]: Stopped target Timers.
Nov 29 10:08:04 np0005539860 systemd[97866]: Stopped Daily Cleanup of User's Temporary Directories.
Nov 29 10:08:04 np0005539860 systemd[97866]: Closed D-Bus User Message Bus Socket.
Nov 29 10:08:04 np0005539860 systemd[97866]: Stopped Create User's Volatile Files and Directories.
Nov 29 10:08:04 np0005539860 systemd[97866]: Removed slice User Application Slice.
Nov 29 10:08:04 np0005539860 systemd[97866]: Reached target Shutdown.
Nov 29 10:08:04 np0005539860 systemd[97866]: Finished Exit the Session.
Nov 29 10:08:04 np0005539860 systemd[97866]: Reached target Exit the Session.
Nov 29 10:08:04 np0005539860 systemd[1]: user@0.service: Deactivated successfully.
Nov 29 10:08:04 np0005539860 systemd[1]: Stopped User Manager for UID 0.
Nov 29 10:08:04 np0005539860 systemd[1]: Stopping User Runtime Directory /run/user/0...
Nov 29 10:08:04 np0005539860 systemd[1]: run-user-0.mount: Deactivated successfully.
Nov 29 10:08:04 np0005539860 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Nov 29 10:08:04 np0005539860 systemd[1]: Stopped User Runtime Directory /run/user/0.
Nov 29 10:08:04 np0005539860 systemd[1]: Removed slice User Slice of UID 0.
Nov 29 10:08:05 np0005539860 python3.9[98739]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:05 np0005539860 python3.9[98891]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:06 np0005539860 python3.9[99043]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:07 np0005539860 python3.9[99195]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:08 np0005539860 python3.9[99347]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
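The run of ansible.builtin.file tasks above stages the agent's state directories (the ansible-generated config-data directory, /var/lib/neutron and its kill_scripts, ovn-metadata-proxy, and external/pids subtrees) as zuul-owned directories labeled container_file_t so the container can write through its bind mounts. A minimal out-of-band sketch of the same preparation, with paths taken from the log (the chcon call is a stand-in for Ansible's setype handling):

    import os, shutil, subprocess

    DIRS = [
        "/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent",
        "/var/lib/neutron",
        "/var/lib/neutron/kill_scripts",
        "/var/lib/neutron/ovn-metadata-proxy",
        "/var/lib/neutron/external/pids",
    ]

    for d in DIRS:
        os.makedirs(d, mode=0o755, exist_ok=True)    # mode=0755 as in the /var/lib/neutron tasks
        shutil.chown(d, user="zuul", group="zuul")   # owner/group from the log
        # setype=container_file_t; chcon shown here in place of the selinux bindings
        subprocess.run(["chcon", "-t", "container_file_t", d], check=True)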
Nov 29 10:08:09 np0005539860 python3.9[99497]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:08:10 np0005539860 python3.9[99649]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
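ansible.posix.seboolean with persistent=True state=True is the module form of setsebool -P; virt_sandbox_use_netlink lets the sandboxed agent open netlink sockets under the container SELinux domain. The CLI equivalent, for reference:

    import subprocess

    # Persistent (-P) boolean change, mirroring persistent=True state=True.
    subprocess.run(["setsebool", "-P", "virt_sandbox_use_netlink", "on"], check=True)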
Nov 29 10:08:11 np0005539860 python3.9[99799]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:12 np0005539860 python3.9[99921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428890.9165466-86-56994304643593/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:13 np0005539860 python3.9[100071]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:14 np0005539860 python3.9[100192]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428892.7988994-101-70802127627745/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
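Each of the two deployments above (the haproxy wrapper and the haproxy-kill script) follows Ansible's usual two-step idiom: a legacy.stat probe fetches the destination's SHA-1 checksum, and legacy.copy rewrites the file only when it differs from the rendered template. The gist of that idempotence check, sketched with hashlib (the rendered-source variable in the usage comment is hypothetical):

    import hashlib, shutil
    from pathlib import Path

    def sha1(path: Path) -> str:
        return hashlib.sha1(path.read_bytes()).hexdigest()

    def copy_if_changed(src: Path, dest: Path) -> bool:
        # Mirrors stat-then-copy: skip the write when checksums already match.
        if dest.exists() and sha1(dest) == sha1(src):
            return False
        shutil.copy2(src, dest)
        return True

    # e.g. copy_if_changed(rendered_haproxy_j2,  # hypothetical rendered template
    #                      Path("/var/lib/neutron/ovn_metadata_haproxy_wrapper"))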
Nov 29 10:08:15 np0005539860 python3.9[100344]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:08:16 np0005539860 python3.9[100428]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:08:18 np0005539860 python3.9[100581]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
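The legacy.dnf and builtin.systemd tasks above install openvswitch and leave the service both enabled and running, since the OVN containers declare a dependency on openvswitch.service. Outside Ansible the same two steps reduce to:

    import subprocess

    # state=present
    subprocess.run(["dnf", "-y", "install", "openvswitch"], check=True)
    # enabled=True plus state=started in one call
    subprocess.run(["systemctl", "enable", "--now", "openvswitch.service"], check=True)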
Nov 29 10:08:19 np0005539860 python3.9[100734]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:20 np0005539860 python3.9[100855]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428898.9849358-138-187376702327734/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:20 np0005539860 python3.9[101005]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:21 np0005539860 python3.9[101126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428900.2450466-138-169370106745860/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:22 np0005539860 python3.9[101276]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:23 np0005539860 ovn_controller[97827]: 2025-11-29T15:08:23Z|00025|memory|INFO|16384 kB peak resident set size after 29.7 seconds
Nov 29 10:08:23 np0005539860 ovn_controller[97827]: 2025-11-29T15:08:23Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Nov 29 10:08:23 np0005539860 podman[101371]: 2025-11-29 15:08:23.632286353 +0000 UTC m=+0.159735041 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3)
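podman health_status records like the one above come from periodic healthcheck runs against ovn_controller; health_status=healthy with health_failing_streak=0 means the configured /openstack/healthcheck test is passing. The same state can be probed by hand, roughly:

    import json, subprocess

    def health(name: str) -> str:
        # Trigger one run of the container's configured healthcheck...
        subprocess.run(["podman", "healthcheck", "run", name], check=False)
        # ...then read the aggregated state podman keeps for it.
        raw = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        state = json.loads(raw)[0]["State"]
        # podman has exposed this under both keys across releases
        return (state.get("Health") or state.get("Healthcheck", {})).get("Status", "unknown")

    # e.g. health("ovn_controller") -> "healthy"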
Nov 29 10:08:23 np0005539860 python3.9[101407]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428902.2938197-182-144808827933498/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:24 np0005539860 python3.9[101574]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:25 np0005539860 python3.9[101695]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428903.9256907-182-235127918716209/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
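The snippets written above carry numeric prefixes (01-rootwrap.conf, 01-neutron-ovn-metadata-agent.conf, 05-nova-metadata.conf, 10-neutron-metadata.conf) because oslo.config loads a --config-dir in sorted filename order, with later files overriding earlier ones on conflicting keys, so 10-neutron-metadata.conf has the last word. The ordering rule itself:

    from pathlib import Path

    # oslo.config reads *.conf from a config-dir sorted lexically; higher
    # prefixes override lower ones where keys collide.
    for snippet in sorted(Path("/etc/neutron.conf.d").glob("*.conf")):
        print(snippet.name)   # 01-... before 05-... before 10-...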
Nov 29 10:08:25 np0005539860 python3.9[101845]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:08:26 np0005539860 python3.9[101999]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:27 np0005539860 python3.9[102151]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:28 np0005539860 python3.9[102229]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:29 np0005539860 python3.9[102381]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:29 np0005539860 python3.9[102459]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:30 np0005539860 python3.9[102611]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:08:31 np0005539860 python3.9[102763]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:31 np0005539860 python3.9[102841]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:08:32 np0005539860 python3.9[102993]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:33 np0005539860 python3.9[103071]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:08:34 np0005539860 python3.9[103223]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
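The unit file plus 91-edpm-container-shutdown.preset pair above follows the standard systemd preset mechanism, and the builtin.systemd task then performs the daemon-reload, enable, and start visible in the next lines. A sketch of the sequence; the preset body is an assumption, since only its filename appears in the log:

    import subprocess
    from pathlib import Path

    # Assumed one-line preset content (not shown in the log).
    Path("/etc/systemd/system-preset/91-edpm-container-shutdown.preset").write_text(
        "enable edpm-container-shutdown.service\n"
    )
    subprocess.run(["systemctl", "daemon-reload"], check=True)
    subprocess.run(["systemctl", "enable", "--now", "edpm-container-shutdown.service"], check=True)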
Nov 29 10:08:34 np0005539860 systemd[1]: Reloading.
Nov 29 10:08:34 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:08:34 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:08:35 np0005539860 python3.9[103412]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:35 np0005539860 python3.9[103490]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:08:36 np0005539860 python3.9[103642]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:37 np0005539860 python3.9[103720]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:08:38 np0005539860 python3.9[103872]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:08:38 np0005539860 systemd[1]: Reloading.
Nov 29 10:08:38 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:08:38 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:08:38 np0005539860 systemd[1]: Starting Create netns directory...
Nov 29 10:08:38 np0005539860 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 10:08:38 np0005539860 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 10:08:38 np0005539860 systemd[1]: Finished Create netns directory.
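netns-placeholder is a oneshot whose description ("Create netns directory") and the transient run-netns-placeholder.mount suggest it seeds /run/netns as a shared mount before the agent container bind-mounts it (/run/netns:/run/netns:shared in the create command below). A plausible equivalent, assuming the unit simply creates and removes a throwaway namespace; this is an inference, not the unit's verbatim ExecStart:

    import os, subprocess

    # Adding any named netns makes iproute2 set up /run/netns as a shared
    # mount point; the placeholder itself can then be dropped (needs root).
    os.makedirs("/run/netns", exist_ok=True)
    subprocess.run(["ip", "netns", "add", "placeholder"], check=True)
    subprocess.run(["ip", "netns", "delete", "placeholder"], check=True)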
Nov 29 10:08:39 np0005539860 python3.9[104065]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:40 np0005539860 python3.9[104217]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:41 np0005539860 python3.9[104340]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764428920.0100274-333-26912109410825/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:42 np0005539860 python3.9[104492]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:08:43 np0005539860 python3.9[104644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:08:44 np0005539860 python3.9[104767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764428922.7866635-358-64674053526463/.source.json _original_basename=.t0tikrtg follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
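ovn_metadata_agent.json is a kolla config file: inside the container, kolla_set_configs loads it as /var/lib/kolla/config_files/config.json, copies the listed sources into place, fixes ownership, and writes the service command to /run_command (all visible in the container log further down). Its logged effects suggest a layout along these lines; an illustrative reconstruction, not the file's verbatim contents:

    import json

    config = {
        "command": "neutron-ovn-metadata-agent",
        "config_files": [
            {   # matches "Copying /etc/neutron.conf.d/01-rootwrap.conf ..." below
                "source": "/etc/neutron.conf.d/01-rootwrap.conf",
                "dest": "/etc/neutron/rootwrap.conf",
                "owner": "neutron",
                "perm": "0600",
            },
        ],
        "permissions": [
            # matches the "Setting permission for /var/lib/neutron..." lines below
            {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": True},
        ],
    }
    print(json.dumps(config, indent=2))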
Nov 29 10:08:44 np0005539860 python3.9[104919]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:08:47 np0005539860 python3.9[105346]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Nov 29 10:08:48 np0005539860 python3.9[105498]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:08:49 np0005539860 python3.9[105650]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 10:08:50 np0005539860 python3[105828]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
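container_config_hash digests the generated configuration so that the EDPM_CONFIG_HASH value injected into the container environment (visible in the create record below) changes whenever the config does; that changed environment is what forces a recreate on config updates, and the 64-hex value is consistent with SHA-256. The general idea as a sketch; the module's exact input set is internal to edpm-ansible:

    import hashlib
    from pathlib import Path

    def config_hash(config_dir: str) -> str:
        h = hashlib.sha256()
        for f in sorted(Path(config_dir).rglob("*")):   # stable ordering matters
            if f.is_file():
                h.update(str(f.relative_to(config_dir)).encode())
                h.update(f.read_bytes())
        return h.hexdigest()

    # changed digest -> changed --env EDPM_CONFIG_HASH -> container recreated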
Nov 29 10:08:51 np0005539860 podman[105865]: 2025-11-29 15:08:51.249675917 +0000 UTC m=+0.064935986 container create 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 10:08:51 np0005539860 podman[105865]: 2025-11-29 15:08:51.213244615 +0000 UTC m=+0.028504774 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 10:08:51 np0005539860 python3[105828]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
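Comparing the config_data label in the create record with the generated command shows the translation edpm_container_manage performs: 'net' becomes --network, 'pid' becomes --pid, 'privileged' becomes --privileged, each 'environment' entry an --env, each 'volumes' entry a --volume, and the healthcheck 'test' the --healthcheck-command. A simplified rendering of that mapping, not the module's actual code:

    def podman_args(name: str, cfg: dict) -> list[str]:
        args = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if "pid" in cfg:
            args += ["--pid", cfg["pid"]]
        if cfg.get("privileged"):
            args += ["--privileged=True"]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        return args + [cfg["image"]]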
Nov 29 10:08:52 np0005539860 python3.9[106055]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:08:52 np0005539860 python3.9[106209]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:08:53 np0005539860 python3.9[106285]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:08:54 np0005539860 podman[106408]: 2025-11-29 15:08:54.211489528 +0000 UTC m=+0.107643285 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 10:08:54 np0005539860 python3.9[106456]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764428933.5840635-446-135576050044751/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:08:54 np0005539860 python3.9[106539]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:08:54 np0005539860 systemd[1]: Reloading.
Nov 29 10:08:55 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:08:55 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:08:55 np0005539860 python3.9[106651]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:08:56 np0005539860 systemd[1]: Reloading.
Nov 29 10:08:56 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:08:56 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:08:57 np0005539860 systemd[1]: Starting ovn_metadata_agent container...
Nov 29 10:08:57 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:08:57 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f94445e79e6a7a397b60ba38a16b6b6f339c6321f24757f84fbc7c61827350/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Nov 29 10:08:57 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/90f94445e79e6a7a397b60ba38a16b6b6f339c6321f24757f84fbc7c61827350/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 10:08:57 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.
Nov 29 10:08:57 np0005539860 podman[106692]: 2025-11-29 15:08:57.253491746 +0000 UTC m=+0.141501139 container init 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + sudo -E kolla_set_configs
Nov 29 10:08:57 np0005539860 podman[106692]: 2025-11-29 15:08:57.293108904 +0000 UTC m=+0.181118357 container start 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 10:08:57 np0005539860 edpm-start-podman-container[106692]: ovn_metadata_agent
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Validating config file
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Copying service configuration files
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Writing out command to execute
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Setting permission for /var/lib/neutron
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Setting permission for /var/lib/neutron/external
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: ++ cat /run_command
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + CMD=neutron-ovn-metadata-agent
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + ARGS=
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + sudo kolla_copy_cacerts
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + [[ ! -n '' ]]
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + . kolla_extend_start
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: Running command: 'neutron-ovn-metadata-agent'
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + umask 0022
Nov 29 10:08:57 np0005539860 ovn_metadata_agent[106708]: + exec neutron-ovn-metadata-agent
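The traced kolla entrypoint above reduces to: apply the config.json (kolla_set_configs), install CA bundles (kolla_copy_cacerts), read the command kolla wrote to /run_command, set umask 0022, and exec it so the agent replaces the shell and receives signals directly. In outline, with the kolla_extend_start sourcing omitted:

    import os, shlex, subprocess

    subprocess.run(["sudo", "-E", "kolla_set_configs"], check=True)
    subprocess.run(["sudo", "kolla_copy_cacerts"], check=True)

    cmd = open("/run_command").read().strip()   # -> "neutron-ovn-metadata-agent"
    print(f"Running command: '{cmd}'")
    os.umask(0o022)
    argv = shlex.split(cmd)
    os.execvp(argv[0], argv)                    # replaces the shell, like 'exec'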
Nov 29 10:08:57 np0005539860 edpm-start-podman-container[106691]: Creating additional drop-in dependency for "ovn_metadata_agent" (39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1)
Nov 29 10:08:57 np0005539860 podman[106715]: 2025-11-29 15:08:57.389687955 +0000 UTC m=+0.077718515 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 10:08:57 np0005539860 systemd[1]: Reloading.
Nov 29 10:08:57 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:08:57 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:08:57 np0005539860 systemd[1]: Started ovn_metadata_agent container.
Nov 29 10:08:58 np0005539860 systemd[1]: session-21.scope: Deactivated successfully.
Nov 29 10:08:58 np0005539860 systemd[1]: session-21.scope: Consumed 40.668s CPU time.
Nov 29 10:08:58 np0005539860 systemd-logind[794]: Session 21 logged out. Waiting for processes to exit.
Nov 29 10:08:58 np0005539860 systemd-logind[794]: Removed session 21.
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.098 106713 INFO neutron.common.config [-] Logging enabled!
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.098 106713 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.098 106713 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.099 106713 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.100 106713 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.101 106713 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.102 106713 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.103 106713 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.104 106713 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.105 106713 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
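The cluster of options above describes the agent's proxy path: instance requests arrive on the Unix socket at /var/lib/neutron/metadata_proxy (served by a single worker, per metadata_workers = 1) and are forwarded to the Nova metadata API at an upstream URL built from nova_metadata_protocol, nova_metadata_host and nova_metadata_port, i.e. https://nova-metadata-internal.openstack.svc:8775. The instance identity on the forwarded request is authenticated with an HMAC derived from metadata_proxy_shared_secret, which is masked (****) in this dump. A minimal Python sketch of that signing step, assuming the conventional HMAC-SHA256 over the instance UUID; the secret and the UUID below are placeholders, not values from this log:

import hashlib
import hmac

# Endpoint assembled from the options logged above; the secret is masked
# in the log, so this one is a placeholder.
NOVA_METADATA_URL = "https://nova-metadata-internal.openstack.svc:8775"
SHARED_SECRET = b"<metadata_proxy_shared_secret>"  # shown as **** above

def sign_instance_id(instance_id: str) -> str:
    """HMAC-SHA256 of the instance UUID, as carried in the
    X-Instance-ID-Signature header on the proxied request."""
    return hmac.new(SHARED_SECRET, instance_id.encode(), hashlib.sha256).hexdigest()

# Hypothetical instance UUID, for illustration only.
iid = "11111111-2222-3333-4444-555555555555"
headers = {
    "X-Instance-ID": iid,
    "X-Instance-ID-Signature": sign_instance_id(iid),
}
print(headers)

Nova recomputes the same digest with its own copy of the secret and rejects the request if the signature header does not match.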
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.106 106713 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.107 106713 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.108 106713 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.109 106713 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
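Every option up to this point was emitted from cfg.py:2602, the loop in oslo.config's ConfigOpts.log_opt_values() that walks the ungrouped [DEFAULT] options; the lines that follow come from cfg.py:2609, where the same method walks each registered option group (oslo_concurrency, profiler, oslo_policy, ovn, and so on), prefixing each name with its group. A minimal sketch that reproduces this kind of dump with oslo.config; the option names mirror ones seen in this log, and the exact cfg.py line numbers will differ across oslo.config releases:

import logging

from oslo_config import cfg

CONF = cfg.CONF
# One ungrouped ([DEFAULT]) option and one grouped option, matching the
# shape of the dump above.
CONF.register_opts([cfg.IntOpt("metadata_workers", default=1)])
CONF.register_opts(
    [cfg.StrOpt("ovn_sb_connection",
                default="ssl:ovsdbserver-sb.openstack.svc:6642")],
    group="ovn",
)

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("demo")

CONF([], project="demo")                 # parse an empty command line
CONF.log_opt_values(LOG, logging.DEBUG)  # emits the same "name = value" lines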
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.110 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.111 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.112 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.113 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.114 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.115 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
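The capabilities values in the privsep_* sections above are numeric Linux capabilities that oslo.privsep retains in its helper daemon after dropping everything else. Read against the kernel's numbering in include/uapi/linux/capability.h, [21, 12, 1, 2, 19] is CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE, and the smaller per-context sets are subsets of that. A small decoder, with the mapping hand-copied from that header (verify against your kernel headers if in doubt):

# Capability numbers as defined in include/uapi/linux/capability.h;
# only the values appearing in the dump above are mapped here.
CAP_NAMES = {
    1: "CAP_DAC_OVERRIDE",
    2: "CAP_DAC_READ_SEARCH",
    12: "CAP_NET_ADMIN",
    19: "CAP_SYS_PTRACE",
    21: "CAP_SYS_ADMIN",
}

def decode(caps):
    return [CAP_NAMES.get(c, f"CAP_#{c}") for c in caps]

# The capability sets logged for the agent's privsep contexts above.
print(decode([21, 12, 1, 2, 19]))  # privsep.capabilities
print(decode([21, 12]))            # privsep_dhcp_release / privsep_ovs_vsctl
print(decode([21]))                # privsep_namespace
print(decode([12]))                # privsep_conntrack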
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.116 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.117 106713 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.118 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.119 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.120 106713 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.121 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.122 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.123 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.124 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.125 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.125 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.125 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.125 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.125 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.125 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.125 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.125 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
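Within the ovn group, note that the northbound settings are untouched defaults (tcp:127.0.0.1:6641, empty key/cert paths); only the southbound side, which the metadata agent actually talks to, is configured: mutual TLS to ssl:ovsdbserver-sb.openstack.svc:6642 using the ovn_sb_private_key/certificate/ca_cert files, a 180 s ovsdb_connection_timeout, and a 60000 ms (60 s) OVSDB inactivity probe. A quick way to check that endpoint and TLS material from the host, sketched with only the Python standard library; this is a connectivity probe, not the agent's own code path (the agent goes through ovsdbapp):

import socket
import ssl

# Endpoint and TLS material exactly as logged in ovn.ovn_sb_* above.
HOST, PORT = "ovsdbserver-sb.openstack.svc", 6642

ctx = ssl.create_default_context(
    ssl.Purpose.SERVER_AUTH, cafile="/etc/pki/tls/certs/ovndbca.crt"
)
ctx.load_cert_chain(
    certfile="/etc/pki/tls/certs/ovndb.crt",
    keyfile="/etc/pki/tls/private/ovndb.key",
)
# OVSDB server certificates often lack a hostname SAN; relax hostname
# checking (chain verification stays on) if your deployment's certs do.
ctx.check_hostname = False

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        print("handshake OK:", tls.version(), tls.cipher()[0])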
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.126 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.127 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.128 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.129 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.130 106713 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
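
The block above is oslo.config's standard option dump: with debug = True, the agent calls ConfigOpts.log_opt_values(), which emits one DEBUG line per registered option and masks anything registered with secret=True (hence transport_url = ****). A minimal sketch of the same mechanism outside Neutron, with made-up option names:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # Hypothetical options, only to demonstrate the masking behaviour.
    CONF.register_opts([
        cfg.IntOpt('ovsdb_timeout', default=10),
        cfg.StrOpt('transport_url', secret=True,
                   default='rabbit://guest:guest@localhost//'),
    ])
    CONF([])  # parse an empty argv

    # One "name = value" DEBUG line per option; secret opts print as ****.
    CONF.log_opt_values(LOG, logging.DEBUG)
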
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.176 106713 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.176 106713 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.177 106713 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.177 106713 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.177 106713 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
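
Before touching OVN, the agent opens the local Open vSwitch database at tcp:127.0.0.1:6640 (the ovs.ovsdb_connection value in the dump above) through ovsdbapp, which is also what builds the Bridge.name / Port.name / Interface.name schema indexes logged just before the connect. A rough equivalent using ovsdbapp's public API, with the 10-second timeout taken from OVS.ovsdb_timeout:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Same endpoint and timeout the agent logs above.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # e.g. ['br-int'] on a compute node
    print(api.list_br().execute(check_error=True))
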
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.190 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a (UUID: 3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.220 106713 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.220 106713 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.220 106713 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.221 106713 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.223 106713 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.231 106713 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
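
The southbound database is reached over TLS (ssl:ovsdbserver-sb.openstack.svc:6642), which means the python-ovs stream layer underneath ovsdbapp has to be primed with key material before the IDL connects. A hedged sketch of that raw mechanism; the file paths are placeholders, and in Neutron they come from the agent's [ovn] options:

    from ovs import stream  # python-ovs, the library ovsdbapp builds on

    # Placeholder paths, normally taken from configuration.
    stream.Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovn.key')
    stream.Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovn.crt')
    stream.Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ca.crt')
    # An IDL pointed at ssl:ovsdbserver-sb.openstack.svc:6642 will now
    # negotiate TLS with these credentials.
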
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.238 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], external_ids={}, name=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, nb_cfg_timestamp=1764428881942, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
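
The "Matched CREATE: ChassisPrivateCreateEvent(...)" line is ovsdbapp's event machinery at work: the agent registers a RowEvent restricted to its own Chassis_Private row, and the IDL fires it as soon as that row appears. A minimal sketch of the same pattern, reconstructed from the constructor arguments visible in the log (the handler body is hypothetical):

    from ovsdbapp.backend.ovs_idl import event

    class ChassisPrivateCreateEvent(event.RowEvent):
        """Fire when our own Chassis_Private row shows up."""

        def __init__(self, chassis_name):
            events = (self.ROW_CREATE,)  # == ('create',) as logged above
            super().__init__(events, 'Chassis_Private',
                             (('name', '=', chassis_name),))

        def run(self, event, row, old):
            # Hypothetical reaction; the real agent resyncs its state here.
            print('chassis registered:', row.name)
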
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.239 106713 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fcffd90cdc0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.240 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.240 106713 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.240 106713 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
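
The Acquiring/Acquired/Releasing triple around "singleton_lock" is oslo.concurrency's lockutils tracing every guarded section. The same primitive is available directly; a small sketch with arbitrary lock names:

    from oslo_concurrency import lockutils

    # Context-manager form, as used for "singleton_lock" above.
    with lockutils.lock('singleton_lock'):
        pass  # critical section

    # Decorator form, which produces the acquired/released DEBUG lines
    # seen later for "context-manager".
    @lockutils.synchronized('context-manager')
    def create_context_manager():
        pass
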
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.241 106713 INFO oslo_service.service [-] Starting 1 workers#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.244 106713 DEBUG oslo_service.service [-] Started child 106814 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
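
"Starting 1 workers" and "Started child 106814" come from oslo.service's ProcessLauncher: the parent (106713) forks a single metadata-proxy worker (106814), matching the metadata_workers = 1 option dumped further down. A condensed sketch of that launch pattern; the service class is a hypothetical stand-in:

    from oslo_config import cfg
    from oslo_service import service

    class MetadataProxyService(service.Service):
        """Hypothetical stand-in for the agent's proxy worker."""
        def start(self):
            super().start()
            # each forked child would run its WSGI loop here

    launcher = service.ProcessLauncher(cfg.CONF)
    launcher.launch_service(MetadataProxyService(), workers=1)
    launcher.wait()  # the "Full set of CONF" dump below is logged from wait()
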
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.248 106713 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpsn8e4q0p/privsep.sock']#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.250 106814 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-166388'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.293 106814 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.294 106814 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.295 106814 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.302 106814 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.318 106814 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.334 106814 INFO eventlet.wsgi.server [-] (106814) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
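
The odd-looking "http:/var/lib/neutron/metadata_proxy" is eventlet's rendering of a WSGI server bound to a unix socket instead of a TCP port; the path is the metadata_proxy_socket option dumped below. A bare-bones equivalent, with a hypothetical handler in place of the real proxy logic:

    import socket
    import eventlet
    import eventlet.wsgi

    def app(environ, start_response):
        # Hypothetical handler; the real one forwards to nova-metadata.
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'ok']

    sock = eventlet.listen('/var/lib/neutron/metadata_proxy',
                           family=socket.AF_UNIX)
    # Logs "wsgi starting up on http:/var/lib/neutron/metadata_proxy".
    eventlet.wsgi.server(sock, app)
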
Nov 29 10:08:59 np0005539860 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.943 106713 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.944 106713 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpsn8e4q0p/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.821 106819 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.829 106819 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.833 106819 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.834 106819 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106819#033[00m
Nov 29 10:08:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:08:59.949 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[e7fcd0f0-5bda-4484-8e44-1839c380c873]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
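
Everything from pid 106819 is the privsep daemon the agent spawned through sudo and rootwrap (the helper command logged at 15:08:59.248): it starts as uid/gid 0/0, drops to the capability set of its context (here CAP_SYS_ADMIN only), and then answers calls over /tmp/tmpsn8e4q0p/privsep.sock. In application code the pattern looks roughly like this; the context variable and entrypoint are illustrative, though the names mirror the --privsep_context argument above:

    from oslo_privsep import capabilities, priv_context

    # Matches the privsep_namespace section dumped below: CAP_SYS_ADMIN == 21.
    namespace_cmd = priv_context.PrivContext(
        'neutron',
        cfg_section='privsep_namespace',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN],
    )

    @namespace_cmd.entrypoint
    def create_netns(name):
        # Runs inside the root privsep daemon, not in the agent process.
        ...
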
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.409 106819 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.409 106819 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.409 106819 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.914 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[5e4b03f9-322a-499e-97b5-c29406b715d5]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.918 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, column=external_ids, values=({'neutron:ovn-metadata-id': '7b45f684-fa1a-50c1-9126-d3dc90236c7f'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.931 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
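
These two transactions are the agent registering itself in the southbound database: first adding a neutron:ovn-metadata-id key to its Chassis_Private external_ids, then recording the bridge it serves. With an ovsdbapp southbound API handle (sb_api below is assumed to be already constructed), the equivalent calls are:

    chassis = '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a'

    with sb_api.transaction(check_error=True) as txn:
        # -> DbAddCommand in the log above
        txn.add(sb_api.db_add(
            'Chassis_Private', chassis, 'external_ids',
            {'neutron:ovn-metadata-id': '7b45f684-fa1a-50c1-9126-d3dc90236c7f'}))
        # -> DbSetCommand in the log above
        txn.add(sb_api.db_set(
            'Chassis_Private', chassis,
            ('external_ids', {'neutron:ovn-bridge': 'br-int'})))
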
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.938 106713 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.938 106713 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.939 106713 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.939 106713 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.939 106713 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.939 106713 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.940 106713 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.940 106713 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.940 106713 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.941 106713 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.941 106713 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.942 106713 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.942 106713 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.942 106713 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.943 106713 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.943 106713 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.943 106713 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.944 106713 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.944 106713 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.944 106713 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.944 106713 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.945 106713 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.945 106713 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.945 106713 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.946 106713 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.946 106713 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.946 106713 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.947 106713 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.947 106713 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.947 106713 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.948 106713 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.948 106713 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.949 106713 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.949 106713 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.950 106713 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.950 106713 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.951 106713 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.951 106713 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.951 106713 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.952 106713 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.952 106713 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.953 106713 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.953 106713 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.954 106713 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.954 106713 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.954 106713 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.955 106713 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.955 106713 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.955 106713 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.956 106713 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.956 106713 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.956 106713 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.957 106713 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.957 106713 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.957 106713 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.958 106713 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
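
The logging_* format strings and default_log_levels above are consumed by oslo.log, which the agent initializes before doing anything else; every line in this journal follows logging_default_format_string (timestamp, pid, level, logger, message), with logging_debug_format_suffix appended on DEBUG lines. A minimal setup producing the same shape, assuming only oslo.config and oslo.log:

    from oslo_config import cfg
    from oslo_log import log as logging

    CONF = cfg.CONF
    logging.register_options(CONF)
    CONF(args=[], project='neutron')  # matches "command line args: []" above
    logging.setup(CONF, 'neutron')

    LOG = logging.getLogger(__name__)
    LOG.debug('formatted per the option values dumped above')
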
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.958 106713 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.958 106713 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.959 106713 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.959 106713 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.959 106713 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.960 106713 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.960 106713 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.960 106713 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.961 106713 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.961 106713 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.961 106713 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.962 106713 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.962 106713 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.962 106713 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.963 106713 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.963 106713 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.963 106713 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.964 106713 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.964 106713 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.964 106713 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.965 106713 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
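
The nova_metadata_* and metadata_proxy_shared_secret options above define the proxy's upstream: requests read from the unix socket are forwarded to https://nova-metadata-internal.openstack.svc:8775, with the instance ID signed using the shared secret so Nova can verify the proxy. The signature itself is a plain HMAC-SHA256 over the instance ID, roughly:

    import hashlib
    import hmac

    def sign_instance_id(shared_secret: str, instance_id: str) -> str:
        """Sketch of the X-Instance-ID-Signature header value."""
        return hmac.new(shared_secret.encode(),
                        instance_id.encode(),
                        hashlib.sha256).hexdigest()
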
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.965 106713 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.965 106713 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.966 106713 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.966 106713 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.966 106713 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.967 106713 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.967 106713 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.967 106713 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.968 106713 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.968 106713 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.968 106713 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.969 106713 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.969 106713 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.969 106713 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.969 106713 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.970 106713 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.970 106713 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.970 106713 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.970 106713 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.971 106713 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.971 106713 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.971 106713 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.971 106713 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.971 106713 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.972 106713 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.972 106713 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.972 106713 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.973 106713 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.973 106713 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.973 106713 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.974 106713 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.974 106713 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.974 106713 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.974 106713 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.975 106713 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.975 106713 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.975 106713 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.975 106713 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.976 106713 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.976 106713 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.976 106713 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.976 106713 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.977 106713 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.977 106713 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.977 106713 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.978 106713 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.978 106713 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.978 106713 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.979 106713 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.979 106713 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.979 106713 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.979 106713 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.979 106713 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.980 106713 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.980 106713 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.980 106713 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.981 106713 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.981 106713 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.981 106713 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.982 106713 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.982 106713 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.982 106713 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.983 106713 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.983 106713 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.983 106713 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.983 106713 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.984 106713 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.984 106713 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.984 106713 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.985 106713 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.985 106713 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.985 106713 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.985 106713 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.986 106713 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.986 106713 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.986 106713 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.987 106713 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.987 106713 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.987 106713 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.987 106713 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.988 106713 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.988 106713 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.988 106713 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.988 106713 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.989 106713 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.989 106713 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.989 106713 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.989 106713 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.990 106713 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.990 106713 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.990 106713 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.990 106713 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.990 106713 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.991 106713 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.991 106713 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.991 106713 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.992 106713 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.992 106713 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.993 106713 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.993 106713 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.993 106713 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.994 106713 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.994 106713 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.994 106713 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.995 106713 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.995 106713 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.995 106713 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.996 106713 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.996 106713 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.997 106713 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.997 106713 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.997 106713 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.998 106713 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.998 106713 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.998 106713 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.999 106713 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:00 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.999 106713 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:00.999 106713 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.000 106713 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.000 106713 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.000 106713 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.001 106713 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.001 106713 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.001 106713 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.002 106713 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.002 106713 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.002 106713 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.002 106713 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.002 106713 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.003 106713 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.003 106713 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.003 106713 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.003 106713 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.003 106713 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.004 106713 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.004 106713 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.004 106713 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.004 106713 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.004 106713 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.005 106713 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.005 106713 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.005 106713 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.005 106713 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.005 106713 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.006 106713 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.006 106713 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.006 106713 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.006 106713 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.006 106713 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.007 106713 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.007 106713 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.007 106713 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.007 106713 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.007 106713 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.007 106713 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.008 106713 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.008 106713 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.008 106713 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.008 106713 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.008 106713 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.009 106713 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.009 106713 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.009 106713 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.009 106713 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.010 106713 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.010 106713 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.010 106713 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.010 106713 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.010 106713 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.011 106713 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.011 106713 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.011 106713 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.011 106713 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.011 106713 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.012 106713 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.012 106713 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.012 106713 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.012 106713 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.012 106713 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.013 106713 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.013 106713 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.013 106713 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.013 106713 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.013 106713 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.014 106713 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.014 106713 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.014 106713 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.014 106713 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.014 106713 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.015 106713 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.015 106713 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.015 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.015 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.015 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.016 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.016 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.016 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.016 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.016 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.017 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.017 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.017 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.017 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.017 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.018 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.018 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.018 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.018 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.018 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.019 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.019 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.019 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.019 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.019 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.019 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.019 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.020 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.020 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.020 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.020 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.020 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.020 106713 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.021 106713 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.021 106713 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.021 106713 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.021 106713 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:09:01 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:01.021 106713 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
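The row of asterisks is the closing banner of the option dump that oslo.config writes at service start-up: each "group.option = value" line above comes from ConfigOpts.log_opt_values() (cfg.py:2609 per value, cfg.py:2613 for the banner), with secret options such as transport_url masked as ****. A minimal sketch of how a service produces such a dump; the single registered option here is illustrative, the real agent registers many groups:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    # Illustrative option only; the agent registers far more across its groups.
    CONF.register_opts([cfg.IntOpt('thread_pool_size', default=8)],
                       group='privsep_link')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('oslo_service.service')

    CONF([])  # parse an (empty) command line so the options become readable
    # Emits one DEBUG line per option, then the closing row of asterisks.
    CONF.log_opt_values(LOG, logging.DEBUG)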
Nov 29 10:09:03 np0005539860 systemd-logind[794]: New session 22 of user zuul.
Nov 29 10:09:03 np0005539860 systemd[1]: Started Session 22 of User zuul.
Nov 29 10:09:04 np0005539860 python3.9[106977]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:09:05 np0005539860 python3.9[107133]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
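The task above probes for a container named exactly nova_virtlogd: the backslash-escaped \{\{.Names\}\} keeps Jinja2 from templating the Go format string {{.Names}}, which makes podman print only matching container names. A rough Python equivalent of the probe (container_exists is a hypothetical helper, not part of the playbook):

    import subprocess

    def container_exists(name: str) -> bool:
        # List all containers whose name matches the anchored regex,
        # printing one name per line.
        result = subprocess.run(
            ['podman', 'ps', '-a', '--filter', f'name=^{name}$',
             '--format', '{{.Names}}'],
            capture_output=True, text=True, check=True)
        return name in result.stdout.split()

    print(container_exists('nova_virtlogd'))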
Nov 29 10:09:07 np0005539860 python3.9[107298]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:09:07 np0005539860 systemd[1]: Reloading.
Nov 29 10:09:07 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:09:07 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:09:08 np0005539860 python3.9[107483]: ansible-ansible.builtin.service_facts Invoked
Nov 29 10:09:08 np0005539860 network[107500]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 10:09:08 np0005539860 network[107501]: 'network-scripts' will be removed from distribution in near future.
Nov 29 10:09:08 np0005539860 network[107502]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 10:09:16 np0005539860 python3.9[107763]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:09:17 np0005539860 python3.9[107916]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:09:17 np0005539860 python3.9[108069]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:09:18 np0005539860 python3.9[108222]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:09:19 np0005539860 python3.9[108375]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:09:20 np0005539860 python3.9[108528]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:09:21 np0005539860 python3.9[108681]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:09:22 np0005539860 python3.9[108834]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:23 np0005539860 python3.9[108986]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:24 np0005539860 python3.9[109138]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:24 np0005539860 podman[109186]: 2025-11-29 15:09:24.656998847 +0000 UTC m=+0.110848049 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 10:09:25 np0005539860 python3.9[109314]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:25 np0005539860 python3.9[109466]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:26 np0005539860 python3.9[109618]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:27 np0005539860 python3.9[109770]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:27 np0005539860 podman[109819]: 2025-11-29 15:09:27.600327093 +0000 UTC m=+0.057780890 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 10:09:28 np0005539860 python3.9[109942]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:28 np0005539860 python3.9[110094]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:29 np0005539860 python3.9[110246]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:30 np0005539860 python3.9[110398]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:31 np0005539860 python3.9[110550]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:31 np0005539860 python3.9[110702]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:32 np0005539860 python3.9[110854]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:09:33 np0005539860 python3.9[111006]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
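journald renders embedded newlines as #012 (octal for \n), so the _raw_params above decode to the following shell fragment, which stops and disables certmonger only when it is active, then masks it unless a local unit file already overrides it:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi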
Nov 29 10:09:34 np0005539860 python3.9[111158]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 10:09:35 np0005539860 python3.9[111310]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:09:35 np0005539860 systemd[1]: Reloading.
Nov 29 10:09:35 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:09:35 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:09:36 np0005539860 python3.9[111499]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:09:37 np0005539860 python3.9[111652]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:09:37 np0005539860 python3.9[111805]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:09:38 np0005539860 python3.9[111958]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:09:39 np0005539860 python3.9[112111]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:09:39 np0005539860 python3.9[112264]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:09:40 np0005539860 python3.9[112417]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
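The seven reset-failed commands above clear systemd's failure state for each tripleo_nova_* unit that was just stopped and whose unit files were removed, so no stale "failed" entries survive the cleanup. A compact Python sketch of the same sequence, unit names copied from the log:

    import subprocess

    UNITS = [
        'tripleo_nova_libvirt.target',
        'tripleo_nova_virtlogd_wrapper.service',
        'tripleo_nova_virtnodedevd.service',
        'tripleo_nova_virtproxyd.service',
        'tripleo_nova_virtqemud.service',
        'tripleo_nova_virtsecretd.service',
        'tripleo_nova_virtstoraged.service',
    ]

    for unit in UNITS:
        # Mirrors the ansible.legacy.command tasks above; check=False
        # tolerates units that carry no failure state to reset.
        subprocess.run(['/usr/bin/systemctl', 'reset-failed', unit],
                       check=False)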
Nov 29 10:09:41 np0005539860 python3.9[112570]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Nov 29 10:09:42 np0005539860 python3.9[112723]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 10:09:43 np0005539860 python3.9[112881]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 10:09:44 np0005539860 python3.9[113041]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:09:45 np0005539860 python3.9[113125]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:09:55 np0005539860 podman[113189]: 2025-11-29 15:09:55.667226434 +0000 UTC m=+0.109710099 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 10:09:58 np0005539860 podman[113323]: 2025-11-29 15:09:58.604442795 +0000 UTC m=+0.056666593 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
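[annotation] The two podman events above are periodic container health checks; config_data is the container definition echoed back as an inline Python dict. The ovn_controller definition re-rendered as YAML for readability (content identical to the event above, nothing added):

  depends_on: [openvswitch.service]
  environment:
    KOLLA_CONFIG_STRATEGY: COPY_ALWAYS
  healthcheck:
    mount: /var/lib/openstack/healthchecks/ovn_controller
    test: /openstack/healthcheck
  image: quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
  net: host
  privileged: true
  restart: always
  user: root
  volumes:
    - /lib/modules:/lib/modules:ro
    - /run:/run
    - /var/lib/openvswitch/ovn:/run/ovn:shared,z
    - /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro
    - /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z
    - /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z
    - /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z
    - /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z
    - /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z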
Nov 29 10:09:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:59.133 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:09:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:59.133 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:09:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:09:59.134 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:10:12 np0005539860 kernel: SELinux:  Converting 2757 SID table entries...
Nov 29 10:10:12 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 10:10:12 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 10:10:12 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 10:10:12 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 10:10:12 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 10:10:12 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 10:10:12 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 10:10:21 np0005539860 kernel: SELinux:  Converting 2757 SID table entries...
Nov 29 10:10:21 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 10:10:21 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 10:10:21 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 10:10:21 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 10:10:21 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 10:10:21 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 10:10:21 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 10:10:26 np0005539860 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Nov 29 10:10:26 np0005539860 podman[113379]: 2025-11-29 15:10:26.677743965 +0000 UTC m=+0.107627417 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 10:10:29 np0005539860 podman[113405]: 2025-11-29 15:10:29.624610155 +0000 UTC m=+0.071337780 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Nov 29 10:10:57 np0005539860 podman[127087]: 2025-11-29 15:10:57.618141586 +0000 UTC m=+0.072293240 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 10:10:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:10:59.134 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:10:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:10:59.135 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:10:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:10:59.135 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:11:00 np0005539860 podman[128898]: 2025-11-29 15:11:00.6445698 +0000 UTC m=+0.093278095 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:11:15 np0005539860 kernel: SELinux:  Converting 2758 SID table entries...
Nov 29 10:11:15 np0005539860 kernel: SELinux:  policy capability network_peer_controls=1
Nov 29 10:11:15 np0005539860 kernel: SELinux:  policy capability open_perms=1
Nov 29 10:11:15 np0005539860 kernel: SELinux:  policy capability extended_socket_class=1
Nov 29 10:11:15 np0005539860 kernel: SELinux:  policy capability always_check_network=0
Nov 29 10:11:15 np0005539860 kernel: SELinux:  policy capability cgroup_seclabel=1
Nov 29 10:11:15 np0005539860 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Nov 29 10:11:15 np0005539860 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Nov 29 10:11:16 np0005539860 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Nov 29 10:11:16 np0005539860 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Nov 29 10:11:16 np0005539860 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Nov 29 10:11:23 np0005539860 systemd[1]: Stopping OpenSSH server daemon...
Nov 29 10:11:23 np0005539860 systemd[1]: sshd.service: Deactivated successfully.
Nov 29 10:11:23 np0005539860 systemd[1]: Stopped OpenSSH server daemon.
Nov 29 10:11:23 np0005539860 systemd[1]: sshd.service: Consumed 3.537s CPU time, read 32.0K from disk, written 24.0K to disk.
Nov 29 10:11:23 np0005539860 systemd[1]: Stopped target sshd-keygen.target.
Nov 29 10:11:23 np0005539860 systemd[1]: Stopping sshd-keygen.target...
Nov 29 10:11:23 np0005539860 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 10:11:23 np0005539860 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 10:11:23 np0005539860 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Nov 29 10:11:23 np0005539860 systemd[1]: Reached target sshd-keygen.target.
Nov 29 10:11:24 np0005539860 systemd[1]: Starting OpenSSH server daemon...
Nov 29 10:11:24 np0005539860 systemd[1]: Started OpenSSH server daemon.
Nov 29 10:11:26 np0005539860 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 10:11:26 np0005539860 systemd[1]: Starting man-db-cache-update.service...
Nov 29 10:11:26 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:26 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:26 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:26 np0005539860 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 10:11:28 np0005539860 podman[133870]: 2025-11-29 15:11:28.676230931 +0000 UTC m=+0.126527973 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:11:30 np0005539860 python3.9[135755]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 10:11:30 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:30 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:30 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:30 np0005539860 podman[136254]: 2025-11-29 15:11:30.736401058 +0000 UTC m=+0.054758595 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 10:11:31 np0005539860 python3.9[136911]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 10:11:31 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:31 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:31 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:32 np0005539860 python3.9[138189]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 10:11:32 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:32 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:32 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:33 np0005539860 python3.9[139456]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 10:11:33 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:33 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:33 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
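[annotation] The four ansible.builtin.systemd invocations between 10:11:30 and 10:11:33 stop and mask the monolithic libvirtd service and its legacy TCP/TLS sockets before the modular daemons are enabled; each invocation triggers the systemd "Reloading." lines seen around it. A condensed sketch of those tasks, assuming a loop form the role may not actually use:

  - name: Stop and mask monolithic libvirtd units
    ansible.builtin.systemd:
      name: "{{ item }}"
      state: stopped
      enabled: false
      masked: true
    loop:
      - libvirtd
      - libvirtd-tcp.socket
      - libvirtd-tls.socket
      - virtproxyd-tcp.socket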
Nov 29 10:11:34 np0005539860 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 10:11:34 np0005539860 systemd[1]: Finished man-db-cache-update.service.
Nov 29 10:11:34 np0005539860 systemd[1]: man-db-cache-update.service: Consumed 10.813s CPU time.
Nov 29 10:11:34 np0005539860 systemd[1]: run-re5f37dbfcab34374a3647cae5a1127be.service: Deactivated successfully.
Nov 29 10:11:34 np0005539860 python3.9[140635]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:35 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:35 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:35 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:36 np0005539860 python3.9[140826]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:36 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:36 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:36 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:37 np0005539860 python3.9[141015]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:38 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:38 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:38 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:39 np0005539860 python3.9[141205]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:40 np0005539860 python3.9[141360]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:40 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:40 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:40 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:42 np0005539860 python3.9[141550]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Nov 29 10:11:42 np0005539860 systemd[1]: Reloading.
Nov 29 10:11:42 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:11:42 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:11:42 np0005539860 systemd[1]: Listening on libvirt proxy daemon socket.
Nov 29 10:11:42 np0005539860 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Nov 29 10:11:43 np0005539860 python3.9[141743]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:44 np0005539860 python3.9[141898]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:47 np0005539860 python3.9[142053]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:48 np0005539860 python3.9[142208]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:49 np0005539860 python3.9[142363]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:50 np0005539860 python3.9[142518]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:51 np0005539860 python3.9[142673]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:52 np0005539860 python3.9[142828]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:53 np0005539860 python3.9[142983]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:54 np0005539860 python3.9[143138]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:55 np0005539860 python3.9[143293]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:56 np0005539860 python3.9[143448]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:57 np0005539860 python3.9[143603]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Nov 29 10:11:58 np0005539860 python3.9[143758]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
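[annotation] The invocations from 10:11:34 through 10:11:58 enable the five modular libvirt daemons and then their main/ro/admin sockets one unit at a time (virtproxyd-tls.socket is additionally started at 10:11:42, producing the two "Listening on libvirt proxy daemon" lines). A condensed sketch, assuming a looped form rather than the per-unit tasks actually recorded:

  - name: Enable modular libvirt services
    ansible.builtin.systemd:
      name: "{{ item }}.service"
      enabled: true
      masked: false
    loop: [virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd]

  - name: Enable the matching control sockets
    ansible.builtin.systemd:
      name: "{{ item }}"
      enabled: true
      masked: false
    loop:
      - virtlogd.socket
      - virtlogd-admin.socket
      - virtnodedevd.socket
      - virtnodedevd-ro.socket
      - virtnodedevd-admin.socket
      - virtproxyd.socket
      - virtproxyd-ro.socket
      - virtproxyd-admin.socket
      - virtqemud.socket
      - virtqemud-ro.socket
      - virtqemud-admin.socket
      - virtsecretd.socket
      - virtsecretd-ro.socket
      - virtsecretd-admin.socket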
Nov 29 10:11:58 np0005539860 podman[143885]: 2025-11-29 15:11:58.899473546 +0000 UTC m=+0.161380039 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 10:11:59 np0005539860 python3.9[143935]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:11:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:11:59.136 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:11:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:11:59.136 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:11:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:11:59.136 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:11:59 np0005539860 python3.9[144093]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:12:00 np0005539860 python3.9[144245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:12:00 np0005539860 podman[144369]: 2025-11-29 15:12:00.879953861 +0000 UTC m=+0.072175870 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 10:12:01 np0005539860 python3.9[144416]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:12:01 np0005539860 python3.9[144568]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:12:02 np0005539860 python3.9[144720]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
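[annotation] The file-module invocations above create the libvirt/QEMU PKI directory tree with the container_file_t SELinux type so the paths can be bind-mounted into containers. A sketch of the pattern, folding the recorded differences (no explicit mode on /etc/pki/qemu, group qemu instead of root) into loop items:

  - name: Create libvirt/QEMU PKI directories
    ansible.builtin.file:
      path: "{{ item.path }}"
      state: directory
      owner: root
      group: "{{ item.group | default('root') }}"
      mode: "{{ item.mode | default(omit) }}"
      setype: container_file_t
    loop:
      - { path: /etc/pki/libvirt, mode: '0755' }
      - { path: /etc/pki/libvirt/private, mode: '0755' }
      - { path: /etc/pki/CA, mode: '0755' }
      - { path: /etc/pki/qemu, group: qemu }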
Nov 29 10:12:03 np0005539860 python3.9[144872]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:04 np0005539860 python3.9[144997]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764429122.7269857-554-2642035867898/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:05 np0005539860 python3.9[145149]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:05 np0005539860 python3.9[145274]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764429124.655171-554-5273340454588/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:06 np0005539860 python3.9[145426]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:07 np0005539860 python3.9[145551]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764429126.1912384-554-185344545128597/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:08 np0005539860 python3.9[145703]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:09 np0005539860 python3.9[145828]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764429127.7622497-554-264988505293722/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:09 np0005539860 python3.9[145980]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:10 np0005539860 python3.9[146105]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764429129.2868373-554-188592237866110/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:11 np0005539860 python3.9[146257]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:12 np0005539860 python3.9[146382]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764429130.6689608-554-198522055250364/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:12 np0005539860 python3.9[146534]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:13 np0005539860 python3.9[146657]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764429132.253753-554-129748617704652/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:14 np0005539860 python3.9[146809]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:15 np0005539860 python3.9[146934]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1764429133.824836-554-258692667958033/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
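[annotation] Each stat/copy pair from 10:12:03 onward is Ansible's copy action plugin checksumming the remote file before shipping the rendered source from the control node. The daemon configs all land as libvirt:libvirt mode 0640; auth.conf is tightened to 0600 and the SASL config goes to /etc/sasl2/libvirt.conf under the same pattern. A condensed sketch of the repeated task:

  - name: Install libvirt daemon configuration files
    ansible.builtin.copy:
      src: "{{ item }}"
      dest: "/etc/libvirt/{{ item }}"
      owner: libvirt
      group: libvirt
      mode: "0640"
    loop:
      - virtlogd.conf
      - virtnodedevd.conf
      - virtproxyd.conf
      - virtqemud.conf
      - qemu.conf
      - virtsecretd.conf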
Nov 29 10:12:15 np0005539860 python3.9[147086]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
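[annotation] The command above seeds the Cyrus SASL database used for live-migration authentication; note the journal records the password fed on stdin (12345678), so the recorded task evidently ran without no_log. A sketch of the equivalent task, with no_log added as a suggested hardening (an addition, not in the recorded invocation):

  - name: Create SASL credential for the migration user
    ansible.builtin.command:
      cmd: saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration
      stdin: "12345678"
    no_log: true  # suggested: keeps the stdin value out of the journal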
Nov 29 10:12:16 np0005539860 python3.9[147239]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:17 np0005539860 python3.9[147391]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:18 np0005539860 python3.9[147543]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:19 np0005539860 python3.9[147695]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:19 np0005539860 python3.9[147847]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:20 np0005539860 python3.9[147999]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:21 np0005539860 python3.9[148151]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:22 np0005539860 python3.9[148303]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:23 np0005539860 python3.9[148455]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:23 np0005539860 python3.9[148607]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:24 np0005539860 python3.9[148759]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:25 np0005539860 python3.9[148911]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:25 np0005539860 python3.9[149063]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:26 np0005539860 python3.9[149215]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:27 np0005539860 python3.9[149367]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:28 np0005539860 python3.9[149490]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429146.9811623-775-201038037152122/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:28 np0005539860 python3.9[149642]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:29 np0005539860 podman[149737]: 2025-11-29 15:12:29.410514234 +0000 UTC m=+0.115691542 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller)
Nov 29 10:12:29 np0005539860 python3.9[149779]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429148.3695927-775-16081996784277/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:30 np0005539860 python3.9[149941]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:31 np0005539860 python3.9[150064]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429149.7233288-775-25571950135045/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:31 np0005539860 podman[150165]: 2025-11-29 15:12:31.644491492 +0000 UTC m=+0.081155784 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 10:12:31 np0005539860 python3.9[150232]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:32 np0005539860 python3.9[150355]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429151.27795-775-254318369235243/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:33 np0005539860 python3.9[150507]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:34 np0005539860 python3.9[150630]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429152.7458642-775-8865820764175/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:34 np0005539860 python3.9[150782]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:35 np0005539860 python3.9[150905]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429154.290181-775-81404457277448/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:36 np0005539860 python3.9[151057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:37 np0005539860 python3.9[151180]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429155.7951586-775-118495390090911/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:37 np0005539860 python3.9[151332]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:38 np0005539860 python3.9[151455]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429157.2984266-775-36621356152111/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:39 np0005539860 python3.9[151607]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:40 np0005539860 python3.9[151730]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429158.8185284-775-84019157256992/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:40 np0005539860 python3.9[151882]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:41 np0005539860 python3.9[152005]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429160.3128815-775-228012451784413/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:42 np0005539860 python3.9[152157]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:43 np0005539860 python3.9[152280]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429161.854205-775-163545631259689/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:43 np0005539860 python3.9[152432]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:44 np0005539860 python3.9[152555]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429163.3979495-775-132648640348098/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:45 np0005539860 python3.9[152707]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:45 np0005539860 python3.9[152830]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429164.841984-775-81670377421827/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:46 np0005539860 python3.9[152982]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:12:47 np0005539860 python3.9[153105]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429166.1612852-775-181849931623642/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
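[annotation] Each socket drop-in written above is rendered from the same libvirt-socket.unit.j2 template (every copy task reports checksum 0bad41f409b4ee7e780a2a59dc18f5c84ed99826), so the virtlogd, virtnodedevd, virtproxyd, virtqemud and virtsecretd sockets all receive identical override.conf content. The rendered template body is not logged; a minimal way to inspect what systemd actually merged, using standard tooling and the unit names implied by the paths above:

    # Print a unit together with its drop-ins; the override.conf body is shown last
    systemctl cat virtqemud.socket
    # List every unit on the host that is extended by a drop-in
    systemd-delta --type=extended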
Nov 29 10:12:48 np0005539860 python3.9[153255]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ls -lRZ /run/libvirt | grep -E ':container_\S+_t'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
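[annotation] Decoded (#012 is the syslog escape for a newline), the _raw_params above is a two-line shell check on the SELinux labels under /run/libvirt; run by hand it would read:

    set -o pipefail
    # grep exits 0 only while some entry still carries a container_*_t label,
    # and non-zero (failing the pipeline) once none remain
    ls -lRZ /run/libvirt | grep -E ':container_\S+_t'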
Nov 29 10:12:49 np0005539860 python3.9[153410]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
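[annotation] The seboolean task above persists the os_enable_vtpm boolean; a sketch of the equivalent one-off commands with the stock SELinux CLI:

    # -P writes the boolean to the policy store so it survives reboots
    setsebool -P os_enable_vtpm on
    # Verify the new value
    getsebool os_enable_vtpm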
Nov 29 10:12:51 np0005539860 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Nov 29 10:12:51 np0005539860 python3.9[153566]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:52 np0005539860 python3.9[153718]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:52 np0005539860 python3.9[153870]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:53 np0005539860 python3.9[154022]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:54 np0005539860 python3.9[154174]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:55 np0005539860 python3.9[154326]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:55 np0005539860 python3.9[154478]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:56 np0005539860 python3.9[154630]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:57 np0005539860 python3.9[154782]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:12:58 np0005539860 python3.9[154934]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
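[annotation] The ten copy tasks above fan the same three source files (tls.crt, tls.key and ca.crt under /var/lib/openstack/certs/libvirt/default) out to the libvirt and QEMU PKI paths, the QEMU copies group-owned by qemu with mode 0640. A quick consistency check, assuming only the destination paths recorded above:

    # The deployed server certificate must chain to the deployed CA
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem
    # Certificate and private key must agree; compare public-key digests
    openssl x509 -in /etc/pki/libvirt/servercert.pem -noout -pubkey | openssl sha256
    openssl pkey -in /etc/pki/libvirt/private/serverkey.pem -pubout | openssl sha256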
Nov 29 10:12:59 np0005539860 python3.9[155086]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:12:59 np0005539860 systemd[1]: Reloading.
Nov 29 10:12:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:12:59.136 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:12:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:12:59.138 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:12:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:12:59.138 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:12:59 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:12:59 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:12:59 np0005539860 systemd[1]: Starting libvirt logging daemon socket...
Nov 29 10:12:59 np0005539860 systemd[1]: Listening on libvirt logging daemon socket.
Nov 29 10:12:59 np0005539860 systemd[1]: Starting libvirt logging daemon admin socket...
Nov 29 10:12:59 np0005539860 systemd[1]: Listening on libvirt logging daemon admin socket.
Nov 29 10:12:59 np0005539860 systemd[1]: Starting libvirt logging daemon...
Nov 29 10:12:59 np0005539860 systemd[1]: Started libvirt logging daemon.
Nov 29 10:12:59 np0005539860 podman[155129]: 2025-11-29 15:12:59.602312759 +0000 UTC m=+0.093966991 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:13:00 np0005539860 python3.9[155305]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:13:00 np0005539860 systemd[1]: Reloading.
Nov 29 10:13:00 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:13:00 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:13:00 np0005539860 systemd[1]: Starting libvirt nodedev daemon socket...
Nov 29 10:13:00 np0005539860 systemd[1]: Listening on libvirt nodedev daemon socket.
Nov 29 10:13:00 np0005539860 systemd[1]: Starting libvirt nodedev daemon admin socket...
Nov 29 10:13:00 np0005539860 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Nov 29 10:13:00 np0005539860 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Nov 29 10:13:00 np0005539860 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Nov 29 10:13:00 np0005539860 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 10:13:00 np0005539860 systemd[1]: Started libvirt nodedev daemon.
Nov 29 10:13:01 np0005539860 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Nov 29 10:13:01 np0005539860 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Nov 29 10:13:01 np0005539860 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Nov 29 10:13:01 np0005539860 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Nov 29 10:13:01 np0005539860 python3.9[155528]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:13:01 np0005539860 systemd[1]: Reloading.
Nov 29 10:13:01 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:13:01 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:13:01 np0005539860 podman[155533]: 2025-11-29 15:13:01.930630744 +0000 UTC m=+0.120803970 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 10:13:02 np0005539860 systemd[1]: Starting libvirt proxy daemon admin socket...
Nov 29 10:13:02 np0005539860 systemd[1]: Starting libvirt proxy daemon read-only socket...
Nov 29 10:13:02 np0005539860 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Nov 29 10:13:02 np0005539860 systemd[1]: Listening on libvirt proxy daemon admin socket.
Nov 29 10:13:02 np0005539860 systemd[1]: Starting libvirt proxy daemon...
Nov 29 10:13:02 np0005539860 systemd[1]: Started libvirt proxy daemon.
Nov 29 10:13:02 np0005539860 setroubleshoot[155398]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ada630da-654b-4663-93aa-573f683d9ca3
Nov 29 10:13:02 np0005539860 setroubleshoot[155398]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
Nov 29 10:13:02 np0005539860 setroubleshoot[155398]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l ada630da-654b-4663-93aa-573f683d9ca3
Nov 29 10:13:02 np0005539860 setroubleshoot[155398]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.#012#012*****  Plugin dac_override (91.4 confidence) suggests   **********************#012#012If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system#012Then turn on full auditing to get path information about the offending file and generate the error again.#012Do#012#012Turn on full auditing#012# auditctl -w /etc/shadow -p w#012Try to recreate AVC. Then execute#012# ausearch -m avc -ts recent#012If you see PATH record check ownership/permissions on file, and fix it,#012otherwise report as a bugzilla.#012#012*****  Plugin catchall (9.59 confidence) suggests   **************************#012#012If you believe that virtlogd should have the dac_read_search capability by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd#012# semodule -X 300 -i my-virtlogd.pp#012
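[annotation] Decoded, the setroubleshoot entries above propose the standard audit2allow workflow; the exact commands embedded in the message (the module name my-virtlogd is setroubleshoot's suggestion, not something already installed on this host) are:

    # Build a local policy module from virtlogd's raw AVC records
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # Install it at priority 300 so it can later be removed without touching the base policy
    semodule -X 300 -i my-virtlogd.pp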
Nov 29 10:13:02 np0005539860 python3.9[155762]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:13:03 np0005539860 systemd[1]: Reloading.
Nov 29 10:13:03 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:13:03 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:13:04 np0005539860 systemd[1]: Listening on libvirt locking daemon socket.
Nov 29 10:13:04 np0005539860 systemd[1]: Starting libvirt QEMU daemon socket...
Nov 29 10:13:04 np0005539860 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 29 10:13:04 np0005539860 systemd[1]: Starting Virtual Machine and Container Registration Service...
Nov 29 10:13:04 np0005539860 systemd[1]: Listening on libvirt QEMU daemon socket.
Nov 29 10:13:04 np0005539860 systemd[1]: Starting libvirt QEMU daemon admin socket...
Nov 29 10:13:04 np0005539860 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Nov 29 10:13:04 np0005539860 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Nov 29 10:13:04 np0005539860 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Nov 29 10:13:04 np0005539860 systemd[1]: Started Virtual Machine and Container Registration Service.
Nov 29 10:13:04 np0005539860 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 10:13:04 np0005539860 systemd[1]: Started libvirt QEMU daemon.
Nov 29 10:13:05 np0005539860 python3.9[155977]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:13:05 np0005539860 systemd[1]: Reloading.
Nov 29 10:13:05 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:13:05 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:13:05 np0005539860 systemd[1]: Starting libvirt secret daemon socket...
Nov 29 10:13:05 np0005539860 systemd[1]: Listening on libvirt secret daemon socket.
Nov 29 10:13:05 np0005539860 systemd[1]: Starting libvirt secret daemon admin socket...
Nov 29 10:13:05 np0005539860 systemd[1]: Starting libvirt secret daemon read-only socket...
Nov 29 10:13:05 np0005539860 systemd[1]: Listening on libvirt secret daemon admin socket.
Nov 29 10:13:05 np0005539860 systemd[1]: Listening on libvirt secret daemon read-only socket.
Nov 29 10:13:05 np0005539860 systemd[1]: Starting libvirt secret daemon...
Nov 29 10:13:05 np0005539860 systemd[1]: Started libvirt secret daemon.
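[annotation] At this point all five modular libvirt daemons (virtlogd, virtnodedevd, virtproxyd, virtqemud, virtsecretd) have been restarted and their socket units are listening again. A minimal post-restart probe, using the unit names from the log and assuming the virsh client is installed:

    systemctl is-active virtlogd.socket virtnodedevd.socket virtproxyd.socket \
        virtqemud.socket virtsecretd.socket
    # End-to-end check through the socket-activated QEMU daemon
    virsh -c qemu:///system version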
Nov 29 10:13:06 np0005539860 python3.9[156189]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:07 np0005539860 python3.9[156341]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 10:13:08 np0005539860 python3.9[156493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:08 np0005539860 python3.9[156616]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429187.4878845-1120-170825430720622/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:09 np0005539860 python3.9[156768]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:10 np0005539860 python3.9[156920]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:10 np0005539860 python3.9[156998]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:11 np0005539860 python3.9[157151]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:12 np0005539860 python3.9[157229]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.0785ba8b recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:12 np0005539860 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Nov 29 10:13:12 np0005539860 systemd[1]: setroubleshootd.service: Deactivated successfully.
Nov 29 10:13:12 np0005539860 python3.9[157381]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:13 np0005539860 python3.9[157459]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:14 np0005539860 python3.9[157611]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:13:15 np0005539860 python3[157764]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 10:13:16 np0005539860 python3.9[157916]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:16 np0005539860 python3.9[157994]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:17 np0005539860 python3.9[158146]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:17 np0005539860 python3.9[158224]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:18 np0005539860 python3.9[158376]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:19 np0005539860 python3.9[158454]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:20 np0005539860 python3.9[158606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:20 np0005539860 python3.9[158684]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:21 np0005539860 python3.9[158836]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:22 np0005539860 python3.9[158961]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429200.835921-1245-271160102064749/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:22 np0005539860 python3.9[159113]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:23 np0005539860 python3.9[159265]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:13:24 np0005539860 python3.9[159420]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
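[annotation] The blockinfile parameters above fully determine the managed block (markers "# BEGIN ANSIBLE MANAGED BLOCK" / "# END ANSIBLE MANAGED BLOCK", validated with nft -c -f before the file is saved); reconstructed from the logged block= value, /etc/sysconfig/nftables.conf gains:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK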
Nov 29 10:13:25 np0005539860 python3.9[159572]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:13:26 np0005539860 python3.9[159725]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:13:27 np0005539860 python3.9[159879]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:13:28 np0005539860 python3.9[160034]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
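[annotation] The preceding tasks implement a check-then-apply cycle: the full ruleset is dry-run with nft -c, the chains file is loaded unconditionally, and the flush/rules/jump-update files are streamed into nft only while the edpm-rules.nft.changed flag file exists, after which the flag is deleted. Replayed by hand with the same file layout:

    # 1. Validate the complete ruleset without applying it
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. Ensure the chains exist, then swap in the new rules in one transaction
    nft -f /etc/nftables/edpm-chains.nft
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -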
Nov 29 10:13:28 np0005539860 python3.9[160186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:29 np0005539860 python3.9[160309]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429208.3238864-1317-152876848306938/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:30 np0005539860 podman[160433]: 2025-11-29 15:13:30.258338612 +0000 UTC m=+0.174709706 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 10:13:30 np0005539860 python3.9[160478]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:30 np0005539860 python3.9[160610]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429209.7588186-1332-190510751077739/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:31 np0005539860 python3.9[160762]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:13:32 np0005539860 podman[160857]: 2025-11-29 15:13:32.276256192 +0000 UTC m=+0.074900336 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 10:13:32 np0005539860 python3.9[160904]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429211.2258627-1347-73305736255566/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:13:33 np0005539860 python3.9[161056]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:13:33 np0005539860 systemd[1]: Reloading.
Nov 29 10:13:33 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:13:33 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:13:33 np0005539860 systemd[1]: Reached target edpm_libvirt.target.
Nov 29 10:13:34 np0005539860 python3.9[161247]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Nov 29 10:13:34 np0005539860 systemd[1]: Reloading.
Nov 29 10:13:34 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:13:34 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:13:35 np0005539860 systemd[1]: Reloading.
Nov 29 10:13:35 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:13:35 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:13:35 np0005539860 systemd[1]: session-22.scope: Deactivated successfully.
Nov 29 10:13:35 np0005539860 systemd[1]: session-22.scope: Consumed 3min 37.975s CPU time.
Nov 29 10:13:35 np0005539860 systemd-logind[794]: Session 22 logged out. Waiting for processes to exit.
Nov 29 10:13:35 np0005539860 systemd-logind[794]: Removed session 22.
Nov 29 10:13:41 np0005539860 systemd-logind[794]: New session 23 of user zuul.
Nov 29 10:13:41 np0005539860 systemd[1]: Started Session 23 of User zuul.
Nov 29 10:13:43 np0005539860 python3.9[161498]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:13:44 np0005539860 python3.9[161652]: ansible-ansible.builtin.service_facts Invoked
Nov 29 10:13:44 np0005539860 network[161669]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 10:13:44 np0005539860 network[161670]: 'network-scripts' will be removed from distribution in near future.
Nov 29 10:13:44 np0005539860 network[161671]: It is advised to switch to 'NetworkManager' instead for network management.
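[annotation] The deprecation warning comes from the legacy SysV network service, the same /etc/rc.d/init.d/network script that systemd-sysv-generator keeps flagging during each reload above, apparently triggered here by the service_facts status probe. The compatibility unit the generator produced can be inspected with:

    systemctl cat network.service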
Nov 29 10:13:55 np0005539860 python3.9[161944]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:13:56 np0005539860 python3.9[162028]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:13:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:13:59.136 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:13:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:13:59.138 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:13:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:13:59.138 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:14:00 np0005539860 podman[162030]: 2025-11-29 15:14:00.701974295 +0000 UTC m=+0.147220958 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Nov 29 10:14:02 np0005539860 python3.9[162207]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:14:02 np0005539860 podman[162232]: 2025-11-29 15:14:02.600684385 +0000 UTC m=+0.055099687 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 10:14:03 np0005539860 python3.9[162379]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:14:04 np0005539860 python3.9[162532]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:14:04 np0005539860 python3.9[162684]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:14:05 np0005539860 python3.9[162837]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:06 np0005539860 python3.9[162960]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429244.8733196-95-103285459419004/.source.iscsi _original_basename=.wejiy4fr follow=False checksum=669174fee616dd4e4936a918bac9df5e93cae449 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:07 np0005539860 python3.9[163112]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:08 np0005539860 python3.9[163264]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
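The five tasks above reset the iSCSI initiator: a fresh IQN is generated with /usr/sbin/iscsi-iname (on RHEL this typically yields an InitiatorName=iqn.1994-05.com.redhat:... entry), written to /etc/iscsi/initiatorname.iscsi, a .initiator_reset marker is touched, and the CHAP algorithm list is pinned in /etc/iscsi/iscsid.conf. A minimal Python sketch of that last lineinfile edit, using only the regexp, insertafter, and line values recorded in the log (no Ansible required; run as root):

    import re

    PATH = "/etc/iscsi/iscsid.conf"
    LINE = "node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5"
    replace_re = re.compile(r"^node\.session\.auth\.chap_algs")
    anchor_re = re.compile(r"^#node\.session\.auth\.chap\.algs")

    with open(PATH) as f:
        lines = f.read().splitlines()

    for i, text in enumerate(lines):
        if replace_re.match(text):
            lines[i] = LINE        # setting already present: overwrite in place
            break
    else:
        # Not found: insert after the last line matching insertafter=,
        # which is lineinfile's behaviour when firstmatch=False.
        hits = [i for i, text in enumerate(lines) if anchor_re.match(text)]
        lines.insert((hits[-1] + 1) if hits else len(lines), LINE)

    with open(PATH, "w") as f:
        f.write("\n".join(lines) + "\n")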
Nov 29 10:14:08 np0005539860 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 10:14:09 np0005539860 python3.9[163417]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:14:09 np0005539860 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Nov 29 10:14:10 np0005539860 python3.9[163573]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:14:10 np0005539860 systemd[1]: Reloading.
Nov 29 10:14:10 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:14:10 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 29 10:14:10 np0005539860 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 10:14:10 np0005539860 systemd[1]: Starting Open-iSCSI...
Nov 29 10:14:10 np0005539860 kernel: Loading iSCSI transport class v2.0-870.
Nov 29 10:14:10 np0005539860 systemd[1]: Started Open-iSCSI.
Nov 29 10:14:10 np0005539860 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Nov 29 10:14:10 np0005539860 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
Nov 29 10:14:11 np0005539860 python3.9[163774]: ansible-ansible.builtin.service_facts Invoked
Nov 29 10:14:11 np0005539860 network[163791]: You are using the 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 10:14:11 np0005539860 network[163792]: 'network-scripts' will be removed from the distribution in the near future.
Nov 29 10:14:11 np0005539860 network[163793]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 10:14:16 np0005539860 python3.9[164064]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 10:14:17 np0005539860 python3.9[164216]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Nov 29 10:14:18 np0005539860 python3.9[164372]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:18 np0005539860 python3.9[164495]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429257.7205553-172-248868856929526/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:19 np0005539860 python3.9[164647]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
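Taken together, the three tasks above load dm-multipath immediately and persist it across boots in both /etc/modules-load.d/dm-multipath.conf and /etc/modules. A rough Python equivalent of what the modprobe module and the two file tasks do (paths taken from the log; run as root):

    import subprocess
    from pathlib import Path

    # Load the module now (community.general.modprobe, state=present).
    subprocess.run(["modprobe", "dm-multipath"], check=True)

    # Persist it for systemd-modules-load on the next boot.
    Path("/etc/modules-load.d/dm-multipath.conf").write_text("dm-multipath\n")

    # And in the legacy /etc/modules list, created if missing (create=True).
    modules = Path("/etc/modules")
    text = modules.read_text() if modules.exists() else ""
    if "dm-multipath" not in text.split():
        modules.write_text(text + "dm-multipath\n")

The restart of systemd-modules-load.service that follows simply re-reads these drop-ins.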
Nov 29 10:14:20 np0005539860 python3.9[164799]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:14:20 np0005539860 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 10:14:20 np0005539860 systemd[1]: Stopped Load Kernel Modules.
Nov 29 10:14:20 np0005539860 systemd[1]: Stopping Load Kernel Modules...
Nov 29 10:14:20 np0005539860 systemd[1]: Starting Load Kernel Modules...
Nov 29 10:14:20 np0005539860 systemd[1]: Finished Load Kernel Modules.
Nov 29 10:14:21 np0005539860 python3.9[164955]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:14:22 np0005539860 python3.9[165107]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:14:23 np0005539860 python3.9[165259]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:14:23 np0005539860 python3.9[165411]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:24 np0005539860 python3.9[165534]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429263.243007-230-185168919997392/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:25 np0005539860 python3.9[165686]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:14:25 np0005539860 python3.9[165839]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:26 np0005539860 python3.9[165991]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:27 np0005539860 python3.9[166143]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:28 np0005539860 python3.9[166295]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:29 np0005539860 python3.9[166447]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:29 np0005539860 python3.9[166599]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:30 np0005539860 python3.9[166751]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
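The grep/lineinfile/replace sequence from 10:14:25 onward normalizes /etc/multipath.conf: it guarantees a blacklist { } block exists, strips a catch-all devnode ".*" entry from it, and pins four defaults options. Assuming the copied template already carried a defaults section, the edited file should end up containing a fragment like the following (a reconstruction from the logged parameters, not a capture of the file itself; option order depends on what the template already contained):

    defaults {
            find_multipaths yes
            recheck_wwid yes
            skip_kpartx yes
            user_friendly_names no
    }
    blacklist {
    }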
Nov 29 10:14:31 np0005539860 podman[166875]: 2025-11-29 15:14:31.329826554 +0000 UTC m=+0.151494241 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 10:14:31 np0005539860 python3.9[166920]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:14:32 np0005539860 python3.9[167084]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:33 np0005539860 podman[167208]: 2025-11-29 15:14:33.02882787 +0000 UTC m=+0.069135640 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 10:14:33 np0005539860 python3.9[167255]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:14:34 np0005539860 python3.9[167407]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:34 np0005539860 python3.9[167485]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:14:35 np0005539860 python3.9[167637]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:35 np0005539860 python3.9[167715]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:14:36 np0005539860 python3.9[167867]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:37 np0005539860 python3.9[168019]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:38 np0005539860 python3.9[168097]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:38 np0005539860 python3.9[168249]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:39 np0005539860 python3.9[168327]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:40 np0005539860 python3.9[168479]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:14:40 np0005539860 systemd[1]: Reloading.
Nov 29 10:14:40 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:14:40 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
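edpm-container-shutdown is installed as a unit file plus a 91-*.preset, then enabled and started with daemon_reload=True. The preset's exact contents are not logged; in systemd.preset syntax it would presumably be the single line below (an assumption, shown for illustration only):

    enable edpm-container-shutdown.service

The same pattern (unit file, 91-*.preset, daemon-reload, enable/start) repeats immediately below for netns-placeholder.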
Nov 29 10:14:42 np0005539860 python3.9[168668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:43 np0005539860 python3.9[168746]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:44 np0005539860 python3.9[168898]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:44 np0005539860 python3.9[168976]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:45 np0005539860 python3.9[169128]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:14:45 np0005539860 systemd[1]: Reloading.
Nov 29 10:14:45 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:14:45 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 29 10:14:45 np0005539860 systemd[1]: Starting Create netns directory...
Nov 29 10:14:45 np0005539860 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Nov 29 10:14:45 np0005539860 systemd[1]: netns-placeholder.service: Deactivated successfully.
Nov 29 10:14:45 np0005539860 systemd[1]: Finished Create netns directory.
Nov 29 10:14:46 np0005539860 python3.9[169321]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:14:47 np0005539860 python3.9[169473]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:48 np0005539860 python3.9[169596]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429286.9306216-437-99049072523334/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:14:48 np0005539860 python3.9[169748]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:14:49 np0005539860 python3.9[169900]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:14:50 np0005539860 python3.9[170023]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429289.1668708-462-28604242142414/.source.json _original_basename=.am6rym6j follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
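multipathd.json is the kolla config file that the container later bind-mounts at /var/lib/kolla/config_files/config.json (see the volumes list in the podman create below). Its payload is not logged, but kolla config files carry a command plus optional config_files/permissions entries, and the command here must match the /run_command trace further down; a plausible shape, offered as an assumption:

    {
        "command": "/usr/sbin/multipathd -d"
    }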
Nov 29 10:14:51 np0005539860 python3.9[170175]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:14:53 np0005539860 python3.9[170602]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Nov 29 10:14:54 np0005539860 python3.9[170754]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:14:55 np0005539860 python3.9[170906]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Nov 29 10:14:57 np0005539860 python3[171085]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:14:57 np0005539860 podman[171122]: 2025-11-29 15:14:57.773388487 +0000 UTC m=+0.069385065 container create 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 10:14:57 np0005539860 podman[171122]: 2025-11-29 15:14:57.73373712 +0000 UTC m=+0.029733738 image pull f275b8d168f7f57f31e3da49224019f39f95c80a833f083696a964527b07b54f quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Nov 29 10:14:57 np0005539860 python3[171085]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
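The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage flattens config_data into CLI flags: environment entries become --env, healthcheck.test becomes --healthcheck-command, net becomes --network, privileged becomes --privileged=True, and each volumes entry becomes a --volume, followed by the image. A hypothetical sketch of that mapping (not the module's real code; --label handling omitted for brevity):

    def podman_create_args(name, cfg):
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        args += ["--log-driver", "journald", "--log-level", "info"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        return args + [cfg["image"]]

    # e.g. podman_create_args("multipathd", config_data) reproduces the
    # logged command above, minus the --label flags.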
Nov 29 10:14:58 np0005539860 python3.9[171314]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:14:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:14:59.137 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:14:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:14:59.138 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:14:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:14:59.139 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:14:59 np0005539860 python3.9[171468]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:00 np0005539860 python3.9[171544]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:15:00 np0005539860 systemd[1]: virtnodedevd.service: Deactivated successfully.
Nov 29 10:15:00 np0005539860 python3.9[171695]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764429300.2730012-550-81176686624893/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:01 np0005539860 python3.9[171772]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:15:01 np0005539860 systemd[1]: Reloading.
Nov 29 10:15:01 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:15:01 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:15:01 np0005539860 podman[171774]: 2025-11-29 15:15:01.623625718 +0000 UTC m=+0.111207460 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 10:15:02 np0005539860 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 10:15:02 np0005539860 python3.9[171908]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:02 np0005539860 systemd[1]: Reloading.
Nov 29 10:15:02 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:15:02 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Nov 29 10:15:02 np0005539860 systemd[1]: Starting multipathd container...
Nov 29 10:15:02 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:15:02 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85fd2fefae0173eda694734ab4acb7c6b2bb30db1ec53399e7350c9d5bf913/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 10:15:02 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85fd2fefae0173eda694734ab4acb7c6b2bb30db1ec53399e7350c9d5bf913/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 10:15:02 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.
Nov 29 10:15:02 np0005539860 podman[171949]: 2025-11-29 15:15:02.954875188 +0000 UTC m=+0.130069683 container init 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 10:15:02 np0005539860 multipathd[171964]: + sudo -E kolla_set_configs
Nov 29 10:15:02 np0005539860 podman[171949]: 2025-11-29 15:15:02.993211659 +0000 UTC m=+0.168406124 container start 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:15:02 np0005539860 podman[171949]: multipathd
Nov 29 10:15:03 np0005539860 systemd[1]: Started multipathd container.
Nov 29 10:15:03 np0005539860 multipathd[171964]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:15:03 np0005539860 multipathd[171964]: INFO:__main__:Validating config file
Nov 29 10:15:03 np0005539860 multipathd[171964]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:15:03 np0005539860 multipathd[171964]: INFO:__main__:Writing out command to execute
Nov 29 10:15:03 np0005539860 multipathd[171964]: ++ cat /run_command
Nov 29 10:15:03 np0005539860 multipathd[171964]: + CMD='/usr/sbin/multipathd -d'
Nov 29 10:15:03 np0005539860 multipathd[171964]: + ARGS=
Nov 29 10:15:03 np0005539860 multipathd[171964]: + sudo kolla_copy_cacerts
Nov 29 10:15:03 np0005539860 podman[171971]: 2025-11-29 15:15:03.088001603 +0000 UTC m=+0.075865211 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:15:03 np0005539860 multipathd[171964]: + [[ ! -n '' ]]
Nov 29 10:15:03 np0005539860 multipathd[171964]: + . kolla_extend_start
Nov 29 10:15:03 np0005539860 multipathd[171964]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 10:15:03 np0005539860 multipathd[171964]: Running command: '/usr/sbin/multipathd -d'
Nov 29 10:15:03 np0005539860 multipathd[171964]: + umask 0022
Nov 29 10:15:03 np0005539860 multipathd[171964]: + exec /usr/sbin/multipathd -d
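The '+'-prefixed lines are the container entrypoint tracing itself (shell xtrace): kolla_set_configs copies files per config.json, the command is read back from /run_command, and the shell execs it. In Python terms the tail of that sequence is roughly:

    import os, shlex

    with open("/run_command") as f:          # written by kolla_set_configs
        cmd = f.read().strip()               # '/usr/sbin/multipathd -d'

    os.umask(0o022)                          # matches the 'umask 0022' trace
    argv = shlex.split(cmd)
    os.execv(argv[0], argv)                  # replaces the shell, like 'exec'

The transient 2ac126...-6c19bb71f252f869.service failure logged next is the first podman healthcheck run exiting 1 while multipathd is still initializing; podman reports the same moment as health_status=starting, health_failing_streak=1 at 10:15:03.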
Nov 29 10:15:03 np0005539860 systemd[1]: 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88-6c19bb71f252f869.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:15:03 np0005539860 systemd[1]: 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88-6c19bb71f252f869.service: Failed with result 'exit-code'.
Nov 29 10:15:03 np0005539860 multipathd[171964]: 3058.816491 | --------start up--------
Nov 29 10:15:03 np0005539860 multipathd[171964]: 3058.816509 | read /etc/multipath.conf
Nov 29 10:15:03 np0005539860 multipathd[171964]: 3058.824935 | path checkers start up
Nov 29 10:15:03 np0005539860 podman[172016]: 2025-11-29 15:15:03.175074617 +0000 UTC m=+0.060929175 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 10:15:03 np0005539860 python3.9[172172]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:15:04 np0005539860 python3.9[172326]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:15:04 np0005539860 systemd[1]: virtqemud.service: Deactivated successfully.
Nov 29 10:15:05 np0005539860 python3.9[172492]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
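This is a restart-handler pattern: the marker file /etc/multipath/.multipath_restart_required was touched after the config edits (10:14:32) and stat'ed again at 10:15:03, podman ps --filter volume=/etc/multipath.conf identified which container mounts the edited file, and edpm_multipathd is now restarted. A condensed sketch of that logic (unit name and paths from the log; run as root):

    import os, subprocess

    MARKER = "/etc/multipath/.multipath_restart_required"

    users = subprocess.run(
        ["podman", "ps", "--filter", "volume=/etc/multipath.conf",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.split()

    if users and os.path.exists(MARKER):
        subprocess.run(["systemctl", "restart", "edpm_multipathd"], check=True)
        os.remove(MARKER)   # assumption: the role clears the marker afterwards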
Nov 29 10:15:05 np0005539860 systemd[1]: Stopping multipathd container...
Nov 29 10:15:05 np0005539860 multipathd[171964]: 3060.974048 | exit (signal)
Nov 29 10:15:05 np0005539860 multipathd[171964]: 3060.974115 | --------shut down-------
Nov 29 10:15:05 np0005539860 systemd[1]: libpod-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope: Deactivated successfully.
Nov 29 10:15:05 np0005539860 podman[172496]: 2025-11-29 15:15:05.300122062 +0000 UTC m=+0.072547371 container died 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 10:15:05 np0005539860 systemd[1]: 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88-6c19bb71f252f869.timer: Deactivated successfully.
Nov 29 10:15:05 np0005539860 systemd[1]: Stopped /usr/bin/podman healthcheck run 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.
Nov 29 10:15:05 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88-userdata-shm.mount: Deactivated successfully.
Nov 29 10:15:05 np0005539860 systemd[1]: var-lib-containers-storage-overlay-0a85fd2fefae0173eda694734ab4acb7c6b2bb30db1ec53399e7350c9d5bf913-merged.mount: Deactivated successfully.
Nov 29 10:15:05 np0005539860 podman[172496]: 2025-11-29 15:15:05.353873812 +0000 UTC m=+0.126299151 container cleanup 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:15:05 np0005539860 podman[172496]: multipathd
Nov 29 10:15:05 np0005539860 podman[172525]: multipathd
Nov 29 10:15:05 np0005539860 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Nov 29 10:15:05 np0005539860 systemd[1]: Stopped multipathd container.
Nov 29 10:15:05 np0005539860 systemd[1]: Starting multipathd container...
Nov 29 10:15:05 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:15:05 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85fd2fefae0173eda694734ab4acb7c6b2bb30db1ec53399e7350c9d5bf913/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 10:15:05 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0a85fd2fefae0173eda694734ab4acb7c6b2bb30db1ec53399e7350c9d5bf913/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 10:15:05 np0005539860 systemd[1]: virtsecretd.service: Deactivated successfully.
Nov 29 10:15:05 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.
Nov 29 10:15:05 np0005539860 podman[172538]: 2025-11-29 15:15:05.580557337 +0000 UTC m=+0.106565684 container init 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Nov 29 10:15:05 np0005539860 multipathd[172554]: + sudo -E kolla_set_configs
Nov 29 10:15:05 np0005539860 podman[172538]: 2025-11-29 15:15:05.610913852 +0000 UTC m=+0.136922189 container start 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 10:15:05 np0005539860 podman[172538]: multipathd
Nov 29 10:15:05 np0005539860 systemd[1]: Started multipathd container.
Nov 29 10:15:05 np0005539860 multipathd[172554]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:15:05 np0005539860 multipathd[172554]: INFO:__main__:Validating config file
Nov 29 10:15:05 np0005539860 multipathd[172554]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:15:05 np0005539860 multipathd[172554]: INFO:__main__:Writing out command to execute
Nov 29 10:15:05 np0005539860 multipathd[172554]: ++ cat /run_command
Nov 29 10:15:05 np0005539860 multipathd[172554]: + CMD='/usr/sbin/multipathd -d'
Nov 29 10:15:05 np0005539860 multipathd[172554]: + ARGS=
Nov 29 10:15:05 np0005539860 multipathd[172554]: + sudo kolla_copy_cacerts
Nov 29 10:15:05 np0005539860 podman[172561]: 2025-11-29 15:15:05.686083952 +0000 UTC m=+0.063124965 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 10:15:05 np0005539860 systemd[1]: 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88-64bd49f74a7fe21a.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:15:05 np0005539860 systemd[1]: 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88-64bd49f74a7fe21a.service: Failed with result 'exit-code'.
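The failing transient unit above is podman's healthcheck runner: podman wraps "podman healthcheck run <container-id>" in a systemd timer/service named "<container-id>-<hash>.service", and status=1 here just means the first check failed while the container was still in the "starting" state reported in the health_status event above (later health_status events in this log report healthy). A minimal sketch of running the same check by hand and interpreting the exit status:

    #!/usr/bin/env python3
    # Sketch: run the container's healthcheck once, the same command the
    # transient systemd unit above wraps, and pass its exit status through.
    import subprocess
    import sys

    CTR = "2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88"

    res = subprocess.run(["podman", "healthcheck", "run", CTR])
    # 0 = healthy; nonzero = the check failed on this run, which is what
    # systemd logged above as "status=1/FAILURE" during container startup.
    sys.exit(res.returncode)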
Nov 29 10:15:05 np0005539860 multipathd[172554]: + [[ ! -n '' ]]
Nov 29 10:15:05 np0005539860 multipathd[172554]: + . kolla_extend_start
Nov 29 10:15:05 np0005539860 multipathd[172554]: Running command: '/usr/sbin/multipathd -d'
Nov 29 10:15:05 np0005539860 multipathd[172554]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Nov 29 10:15:05 np0005539860 multipathd[172554]: + umask 0022
Nov 29 10:15:05 np0005539860 multipathd[172554]: + exec /usr/sbin/multipathd -d
Nov 29 10:15:05 np0005539860 multipathd[172554]: 3061.413608 | --------start up--------
Nov 29 10:15:05 np0005539860 multipathd[172554]: 3061.413630 | read /etc/multipath.conf
Nov 29 10:15:05 np0005539860 multipathd[172554]: 3061.419195 | path checkers start up
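The multipathd trace above is kolla's standard container entrypoint flow: kolla_set_configs loads /var/lib/kolla/config_files/config.json, validates it, copies configuration into place per KOLLA_CONFIG_STRATEGY=COPY_ALWAYS, writes the service command to /run_command, and the wrapper then execs it ("+ exec /usr/sbin/multipathd -d"). A minimal sketch of that flow, assuming a simplified config.json with just "command" and "config_files" entries; the real kolla_set_configs also handles ownership, permissions, merging, and the COPY_ONCE strategy:

    #!/usr/bin/env python3
    # Minimal sketch of the kolla entrypoint flow traced above: load
    # config.json, copy declared config files into place (COPY_ALWAYS),
    # then exec the service command. Simplified schema; not the real tool.
    import json
    import os
    import shutil

    CONFIG = "/var/lib/kolla/config_files/config.json"

    def main():
        with open(CONFIG) as f:
            cfg = json.load(f)

        # COPY_ALWAYS: unconditionally copy every declared file into place.
        for entry in cfg.get("config_files", []):
            shutil.copy(entry["source"], entry["dest"])

        # Matches the trace: CMD read back from /run_command, then exec'd.
        cmd = cfg["command"].split()       # e.g. "/usr/sbin/multipathd -d"
        os.umask(0o022)                    # "+ umask 0022" in the trace
        os.execv(cmd[0], cmd)

    if __name__ == "__main__":
        main()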
Nov 29 10:15:06 np0005539860 python3.9[172746]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:07 np0005539860 python3.9[172898]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Nov 29 10:15:08 np0005539860 python3.9[173050]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Nov 29 10:15:08 np0005539860 kernel: Key type psk registered
Nov 29 10:15:09 np0005539860 python3.9[173211]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:15:09 np0005539860 python3.9[173334]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429308.4696987-630-68120868270136/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:10 np0005539860 python3.9[173486]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:11 np0005539860 python3.9[173638]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:15:11 np0005539860 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 29 10:15:11 np0005539860 systemd[1]: Stopped Load Kernel Modules.
Nov 29 10:15:11 np0005539860 systemd[1]: Stopping Load Kernel Modules...
Nov 29 10:15:11 np0005539860 systemd[1]: Starting Load Kernel Modules...
Nov 29 10:15:11 np0005539860 systemd[1]: Finished Load Kernel Modules.
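The tasks above persist the nvme-fabrics module three ways: modprobe loads it immediately (the "Key type psk registered" kernel line is a side effect), /etc/modules-load.d/nvme-fabrics.conf makes systemd-modules-load load it at boot, and a line in /etc/modules covers the legacy path; the unit is then restarted to apply it now. A sketch of the same steps outside ansible, assuming root and minimal error handling:

    #!/usr/bin/env python3
    # Sketch of the ansible tasks above for the nvme-fabrics module:
    # load it now, persist it for systemd-modules-load, restart the unit.
    import subprocess
    from pathlib import Path

    MODULE = "nvme-fabrics"

    subprocess.run(["modprobe", MODULE], check=True)   # community.general.modprobe

    # ansible.legacy.copy of module-load.conf.j2 -> /etc/modules-load.d/
    conf = Path(f"/etc/modules-load.d/{MODULE}.conf")
    conf.write_text(MODULE + "\n")
    conf.chmod(0o644)

    # ansible.builtin.lineinfile on /etc/modules (create=True, state=present)
    modules = Path("/etc/modules")
    lines = modules.read_text().splitlines() if modules.exists() else []
    if MODULE not in lines:
        modules.write_text("\n".join(lines + [MODULE]) + "\n")

    # ansible.builtin.systemd: state=restarted
    subprocess.run(["systemctl", "restart", "systemd-modules-load.service"],
                   check=True)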
Nov 29 10:15:12 np0005539860 python3.9[173794]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:15:14 np0005539860 systemd[1]: Reloading.
Nov 29 10:15:14 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:15:14 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:15:15 np0005539860 systemd[1]: Reloading.
Nov 29 10:15:15 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:15:15 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:15:15 np0005539860 systemd-logind[794]: Watching system buttons on /dev/input/event0 (Power Button)
Nov 29 10:15:15 np0005539860 systemd-logind[794]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Nov 29 10:15:15 np0005539860 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Nov 29 10:15:15 np0005539860 systemd[1]: Starting man-db-cache-update.service...
Nov 29 10:15:15 np0005539860 systemd[1]: Reloading.
Nov 29 10:15:16 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:15:16 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:15:16 np0005539860 systemd[1]: Queuing reload/restart jobs for marked units…
Nov 29 10:15:17 np0005539860 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Nov 29 10:15:17 np0005539860 systemd[1]: Finished man-db-cache-update.service.
Nov 29 10:15:17 np0005539860 systemd[1]: man-db-cache-update.service: Consumed 1.658s CPU time.
Nov 29 10:15:17 np0005539860 systemd[1]: run-rc283210515b44938ac928dd8c35ec054.service: Deactivated successfully.
Nov 29 10:15:17 np0005539860 python3.9[175199]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:15:17 np0005539860 systemd[1]: Stopping Open-iSCSI...
Nov 29 10:15:17 np0005539860 iscsid[163613]: iscsid shutting down.
Nov 29 10:15:17 np0005539860 systemd[1]: iscsid.service: Deactivated successfully.
Nov 29 10:15:17 np0005539860 systemd[1]: Stopped Open-iSCSI.
Nov 29 10:15:17 np0005539860 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Nov 29 10:15:17 np0005539860 systemd[1]: Starting Open-iSCSI...
Nov 29 10:15:17 np0005539860 systemd[1]: Started Open-iSCSI.
Nov 29 10:15:18 np0005539860 python3.9[175404]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:15:19 np0005539860 python3.9[175560]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:20 np0005539860 python3.9[175712]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:15:20 np0005539860 systemd[1]: Reloading.
Nov 29 10:15:20 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:15:20 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:15:21 np0005539860 python3.9[175897]: ansible-ansible.builtin.service_facts Invoked
Nov 29 10:15:21 np0005539860 network[175914]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 10:15:21 np0005539860 network[175915]: 'network-scripts' will be removed from distribution in near future.
Nov 29 10:15:21 np0005539860 network[175916]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 10:15:27 np0005539860 python3.9[176190]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:28 np0005539860 python3.9[176343]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:29 np0005539860 python3.9[176496]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:30 np0005539860 python3.9[176649]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:30 np0005539860 python3.9[176802]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:31 np0005539860 python3.9[176955]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:32 np0005539860 podman[176957]: 2025-11-29 15:15:32.02732908 +0000 UTC m=+0.107141991 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible)
Nov 29 10:15:32 np0005539860 python3.9[177133]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:33 np0005539860 podman[177286]: 2025-11-29 15:15:33.362145506 +0000 UTC m=+0.088160215 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 10:15:33 np0005539860 python3.9[177287]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:15:34 np0005539860 python3.9[177457]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:35 np0005539860 python3.9[177609]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:36 np0005539860 podman[177733]: 2025-11-29 15:15:36.097690369 +0000 UTC m=+0.090123519 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 10:15:36 np0005539860 python3.9[177781]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:37 np0005539860 python3.9[177933]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:37 np0005539860 python3.9[178085]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:38 np0005539860 python3.9[178237]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:39 np0005539860 python3.9[178389]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:39 np0005539860 python3.9[178541]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:40 np0005539860 python3.9[178693]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:41 np0005539860 python3.9[178845]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:42 np0005539860 python3.9[178997]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:42 np0005539860 python3.9[179149]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:43 np0005539860 python3.9[179301]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:44 np0005539860 python3.9[179453]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:45 np0005539860 python3.9[179605]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:45 np0005539860 python3.9[179757]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:15:47 np0005539860 python3.9[179909]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
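The #012 sequences in the _raw_params above are journald's escaping of embedded newlines; decoded, the shell script ansible ran is:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi

i.e. certmonger is stopped and disabled only if active, and masked only when no local unit override exists.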
Nov 29 10:15:48 np0005539860 python3.9[180061]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 10:15:49 np0005539860 python3.9[180213]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:15:49 np0005539860 systemd[1]: Reloading.
Nov 29 10:15:49 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:15:49 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:15:50 np0005539860 python3.9[180399]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:15:51 np0005539860 python3.9[180552]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:15:51 np0005539860 python3.9[180705]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:15:52 np0005539860 python3.9[180858]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:15:53 np0005539860 python3.9[181011]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:15:54 np0005539860 python3.9[181164]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:15:54 np0005539860 python3.9[181317]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:15:55 np0005539860 python3.9[181470]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
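The run of tasks above is the standard teardown pattern for the old tripleo_nova_* units: stop and disable each service, delete its unit file from both /usr/lib/systemd/system and /etc/systemd/system, daemon-reload, then "systemctl reset-failed" per unit to clear any residual failed state so the names drop out of systemd entirely. A sketch of the whole sequence, assuming root:

    #!/usr/bin/env python3
    # Sketch of the tripleo_nova_* teardown in the log: stop/disable each
    # unit, remove its unit files, reload systemd, clear failed state.
    import subprocess
    from pathlib import Path

    UNITS = [
        "tripleo_nova_compute", "tripleo_nova_migration_target",
        "tripleo_nova_api_cron", "tripleo_nova_api",
        "tripleo_nova_conductor", "tripleo_nova_metadata",
        "tripleo_nova_scheduler", "tripleo_nova_vnc_proxy",
    ]

    for unit in UNITS:
        svc = unit + ".service"
        subprocess.run(["systemctl", "disable", "--now", svc], check=False)
        for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            Path(base, svc).unlink(missing_ok=True)

    subprocess.run(["systemctl", "daemon-reload"], check=True)
    for unit in UNITS:
        # reset-failed may fail harmlessly once the unit no longer exists
        subprocess.run(["systemctl", "reset-failed", unit + ".service"],
                       check=False)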
Nov 29 10:15:57 np0005539860 python3.9[181623]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:15:58 np0005539860 python3.9[181775]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:15:59 np0005539860 python3.9[181927]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:15:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:15:59.138 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:15:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:15:59.139 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:15:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:15:59.139 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:15:59 np0005539860 python3.9[182079]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:00 np0005539860 python3.9[182231]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:01 np0005539860 python3.9[182383]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:02 np0005539860 python3.9[182535]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:02 np0005539860 podman[182633]: 2025-11-29 15:16:02.662490456 +0000 UTC m=+0.107282451 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:16:02 np0005539860 python3.9[182714]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:03 np0005539860 podman[182866]: 2025-11-29 15:16:03.51424552 +0000 UTC m=+0.077469807 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Nov 29 10:16:03 np0005539860 python3.9[182867]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:04 np0005539860 python3.9[183038]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:06 np0005539860 podman[183063]: 2025-11-29 15:16:06.654878975 +0000 UTC m=+0.104969649 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 10:16:09 np0005539860 python3.9[183211]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Nov 29 10:16:10 np0005539860 python3.9[183364]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 10:16:11 np0005539860 python3.9[183522]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
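The getent/group/user triple above ensures the nova account exists on the host with the fixed uid/gid 42436 and libvirt group membership, so ownership of bind-mounted state like /var/lib/nova matches the ids used inside the containers. A sketch of the equivalent idempotent logic, assuming root:

    #!/usr/bin/env python3
    # Sketch of the getent/group/user sequence above: ensure the nova
    # group and user exist with the fixed ids (42436) the containers use.
    import grp
    import pwd
    import subprocess

    try:
        pwd.getpwnam("nova")               # ansible.builtin.getent passwd nova
    except KeyError:
        try:
            grp.getgrnam("nova")
        except KeyError:
            subprocess.run(["groupadd", "--gid", "42436", "nova"], check=True)
        subprocess.run(
            ["useradd", "--uid", "42436", "--gid", "nova",
             "--groups", "libvirt", "--shell", "/bin/sh",
             "--comment", "nova user", "nova"],
            check=True,
        )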
Nov 29 10:16:12 np0005539860 systemd-logind[794]: New session 24 of user zuul.
Nov 29 10:16:12 np0005539860 systemd[1]: Started Session 24 of User zuul.
Nov 29 10:16:12 np0005539860 systemd[1]: session-24.scope: Deactivated successfully.
Nov 29 10:16:12 np0005539860 systemd-logind[794]: Session 24 logged out. Waiting for processes to exit.
Nov 29 10:16:12 np0005539860 systemd-logind[794]: Removed session 24.
Nov 29 10:16:13 np0005539860 python3.9[183708]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:14 np0005539860 python3.9[183829]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429372.8272488-1229-74609609276580/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:14 np0005539860 python3.9[183979]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:15 np0005539860 python3.9[184055]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:16 np0005539860 python3.9[184205]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:16 np0005539860 python3.9[184326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429375.6259625-1229-94089500313859/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:17 np0005539860 python3.9[184476]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:18 np0005539860 python3.9[184597]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429377.070694-1229-248381552298727/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:19 np0005539860 python3.9[184747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:19 np0005539860 python3.9[184868]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429378.5888205-1229-204132853219490/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:20 np0005539860 python3.9[185018]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:21 np0005539860 python3.9[185139]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429379.9037445-1229-165476573945230/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:21 np0005539860 python3.9[185291]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:16:22 np0005539860 python3.9[185443]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:16:23 np0005539860 python3.9[185595]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:16:24 np0005539860 python3.9[185747]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:24 np0005539860 python3.9[185870]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1764429383.719496-1336-22819981583030/.source _original_basename=.th_0eh0v follow=False checksum=6592debed23f03cab64d8dc5b894ce6868ead47a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
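The stat/copy pair above writes /var/lib/nova/compute_id, the node's stable compute UUID, exactly once: the stat task guards the copy, and the copy sets nova:nova ownership, mode 0400, and attributes=+i (the chattr immutable flag) so the id cannot be rewritten or deleted later. A sketch of the equivalent, assuming root and a filesystem supporting chattr; where the UUID value actually comes from is not visible in this log, so uuid4 below is a placeholder assumption:

    #!/usr/bin/env python3
    # Sketch of the compute_id tasks above: write the stable compute UUID
    # once, lock it down (nova:nova, 0400) and mark the file immutable.
    import shutil
    import subprocess
    import uuid
    from pathlib import Path

    path = Path("/var/lib/nova/compute_id")
    if not path.exists():                  # the stat task guards the copy
        path.write_text(str(uuid.uuid4()) + "\n")   # placeholder source
        shutil.chown(path, user="nova", group="nova")
        path.chmod(0o400)
        # attributes=+i in the copy task maps to chattr +i
        subprocess.run(["chattr", "+i", str(path)], check=True)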
Nov 29 10:16:25 np0005539860 python3.9[186022]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:16:26 np0005539860 python3.9[186174]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:27 np0005539860 python3.9[186295]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429386.1677563-1362-13736432247619/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:27 np0005539860 python3.9[186445]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:16:28 np0005539860 python3.9[186566]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429387.4198089-1377-257474418223466/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:16:29 np0005539860 python3.9[186718]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Nov 29 10:16:30 np0005539860 python3.9[186870]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:16:31 np0005539860 python3[187022]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:16:31 np0005539860 podman[187056]: 2025-11-29 15:16:31.399948645 +0000 UTC m=+0.059044893 container create 73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 10:16:31 np0005539860 podman[187056]: 2025-11-29 15:16:31.365716576 +0000 UTC m=+0.024812824 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 10:16:31 np0005539860 python3[187022]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
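The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage flattens the config_data dict into a podman create command. A sketch of that mapping, reconstructed from the log output alone (the labels are trimmed for brevity, and the naive whitespace split of 'command' is enough for this example):

    def podman_create_argv(name, cfg):
        argv = ['podman', 'create', '--name', name,
                '--conmon-pidfile', '/run/%s.pid' % name]
        for key, value in sorted(cfg.get('environment', {}).items()):
            argv += ['--env', '%s=%s' % (key, value)]
        argv += ['--log-driver', 'journald', '--log-level', 'info']
        if 'net' in cfg:
            argv += ['--network', cfg['net']]
        argv.append('--privileged=%s' % cfg.get('privileged', False))
        for opt in cfg.get('security_opt', []):
            argv += ['--security-opt', opt]
        if 'user' in cfg:
            argv += ['--user', cfg['user']]
        for volume in cfg.get('volumes', []):
            argv += ['--volume', volume]
        argv.append(cfg['image'])
        argv += cfg.get('command', '').split()
        return argv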
Nov 29 10:16:32 np0005539860 python3.9[187246]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:16:33 np0005539860 podman[187372]: 2025-11-29 15:16:33.029180831 +0000 UTC m=+0.112584859 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 10:16:33 np0005539860 python3.9[187417]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Nov 29 10:16:33 np0005539860 podman[187551]: 2025-11-29 15:16:33.860341194 +0000 UTC m=+0.071772608 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 10:16:34 np0005539860 python3.9[187597]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
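ansible-container_config_hash condenses the rendered service configuration into a single fingerprint; the 64-hex-digit EDPM_CONFIG_HASH in the ovn_metadata_agent health line above is sha256-sized. The exact scheme is not shown in this log; the following is an assumed illustration (sha256 over the files under a config-data volume):

    import hashlib, os

    def config_hash(config_vol):
        # Assumption: hash file names and contents in a stable order so any
        # rendered-config change flips the fingerprint.
        h = hashlib.sha256()
        for root, _dirs, files in os.walk(config_vol):
            for name in sorted(files):
                path = os.path.join(root, name)
                h.update(path.encode())
                with open(path, 'rb') as f:
                    h.update(f.read())
        return h.hexdigest()

    print(config_hash('/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent'))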
Nov 29 10:16:34 np0005539860 python3[187749]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:16:35 np0005539860 podman[187786]: 2025-11-29 15:16:35.186946047 +0000 UTC m=+0.055221302 container create 116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=edpm)
Nov 29 10:16:35 np0005539860 podman[187786]: 2025-11-29 15:16:35.150184321 +0000 UTC m=+0.018459596 image pull b65793e7266422f5b94c32d109b906c8ffd974cf2ddf0b6929e463e29e05864a quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Nov 29 10:16:35 np0005539860 python3[187749]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
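Note that the full desired configuration travels along as the config_data label on the container itself. That makes later reconciliation cheap: a subsequent run can compare the stored label against the freshly rendered config and only recreate the container on a mismatch. A hedged sketch of that check (hypothetical helper, not the edpm_container_manage implementation; podman inspect and the Go-template index function are standard podman features):

    import subprocess

    def stored_config(name):
        # Read back the config_data label podman stored at create time.
        result = subprocess.run(
            ['podman', 'inspect', '--format',
             '{{ index .Config.Labels "config_data" }}', name],
            capture_output=True, text=True, check=True)
        return result.stdout.strip()

    def needs_recreate(name, desired_config_data):
        return stored_config(name) != str(desired_config_data)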
Nov 29 10:16:35 np0005539860 python3.9[187976]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:16:36 np0005539860 podman[188102]: 2025-11-29 15:16:36.835586844 +0000 UTC m=+0.051555376 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd)
Nov 29 10:16:37 np0005539860 python3.9[188150]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:16:37 np0005539860 python3.9[188301]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764429397.1079128-1469-14704228396730/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
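The unit file's content is not echoed in the journal. Based on the --conmon-pidfile path used at create time, the 'restart': 'always' policy in config_data, and the "Starting nova_compute container..." description systemd prints below, a plausible (assumed, not logged) shape of /etc/systemd/system/edpm_nova_compute.service, written here the way the copy task would deliver it:

    UNIT = """\
    [Unit]
    Description=nova_compute container
    After=network-online.target

    [Service]
    Restart=always
    ExecStart=/usr/bin/podman start nova_compute
    ExecStop=/usr/bin/podman stop -t 10 nova_compute
    PIDFile=/run/nova_compute.pid
    Type=forking

    [Install]
    WantedBy=multi-user.target
    """

    with open('/etc/systemd/system/edpm_nova_compute.service', 'w') as f:
        # Assumed content: only the PIDFile path, restart policy, and
        # Description are corroborated by this log.
        f.write(UNIT)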
Nov 29 10:16:38 np0005539860 python3.9[188377]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:16:38 np0005539860 systemd[1]: Reloading.
Nov 29 10:16:38 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:16:38 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:16:39 np0005539860 python3.9[188488]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:16:39 np0005539860 systemd[1]: Reloading.
Nov 29 10:16:39 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:16:39 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:16:39 np0005539860 systemd[1]: Starting nova_compute container...
Nov 29 10:16:39 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:16:39 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:39 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:39 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:39 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:39 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:39 np0005539860 podman[188529]: 2025-11-29 15:16:39.904103344 +0000 UTC m=+0.129819883 container init 116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, container_name=nova_compute, org.label-schema.schema-version=1.0)
Nov 29 10:16:39 np0005539860 podman[188529]: 2025-11-29 15:16:39.91844996 +0000 UTC m=+0.144166489 container start 116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 10:16:39 np0005539860 podman[188529]: nova_compute
Nov 29 10:16:39 np0005539860 systemd[1]: Started nova_compute container.
Nov 29 10:16:39 np0005539860 nova_compute[188544]: + sudo -E kolla_set_configs
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Validating config file
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying service configuration files
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Deleting /etc/ceph
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Creating directory /etc/ceph
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Writing out command to execute
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 10:16:40 np0005539860 nova_compute[188544]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 10:16:40 np0005539860 nova_compute[188544]: ++ cat /run_command
Nov 29 10:16:40 np0005539860 nova_compute[188544]: + CMD=nova-compute
Nov 29 10:16:40 np0005539860 nova_compute[188544]: + ARGS=
Nov 29 10:16:40 np0005539860 nova_compute[188544]: + sudo kolla_copy_cacerts
Nov 29 10:16:40 np0005539860 nova_compute[188544]: + [[ ! -n '' ]]
Nov 29 10:16:40 np0005539860 nova_compute[188544]: + . kolla_extend_start
Nov 29 10:16:40 np0005539860 nova_compute[188544]: Running command: 'nova-compute'
Nov 29 10:16:40 np0005539860 nova_compute[188544]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 10:16:40 np0005539860 nova_compute[188544]: + umask 0022
Nov 29 10:16:40 np0005539860 nova_compute[188544]: + exec nova-compute
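The kolla_set_configs run traced above is driven by /var/lib/kolla/config_files/config.json (mounted from /var/lib/openstack/config/nova). Kolla's documented schema is a command plus a config_files list of source/dest/owner/perm entries; the following reconstruction uses the copies actually performed in the log, with owner/perm values illustrative rather than logged:

    CONFIG_JSON = {
        # kolla_set_configs writes this to /run_command; the start script then
        # runs CMD=$(cat /run_command) and exec's it, as traced above.
        "command": "nova-compute",
        "config_files": [
            {"source": "/var/lib/kolla/config_files/nova-blank.conf",
             "dest": "/etc/nova/nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/ssh-privatekey",
             "dest": "/var/lib/nova/.ssh/ssh-privatekey", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/run-on-host",
             "dest": "/usr/sbin/iscsiadm", "owner": "nova", "perm": "0755"},
        ],
    }

With KOLLA_CONFIG_STRATEGY=COPY_ALWAYS these copies are repeated on every container start, so edits made inside the container do not survive a restart.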
Nov 29 10:16:41 np0005539860 python3.9[188706]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:16:41 np0005539860 python3.9[188856]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:16:41 np0005539860 nova_compute[188544]: 2025-11-29 15:16:41.942 188548 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 10:16:41 np0005539860 nova_compute[188544]: 2025-11-29 15:16:41.942 188548 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 10:16:41 np0005539860 nova_compute[188544]: 2025-11-29 15:16:41.942 188548 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Nov 29 10:16:41 np0005539860 nova_compute[188544]: 2025-11-29 15:16:41.942 188548 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.069 188548 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.091 188548 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.022s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.091 188548 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
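The three processutils lines above are a feature probe, most likely os-brick checking whether iscsiadm understands node.session.scan (manual scan support). grep exits 1 here because /usr/sbin/iscsiadm inside this container was just replaced by the run-on-host wrapper during kolla_set_configs. A sketch of the probe using the real oslo_concurrency API:

    from oslo_concurrency import processutils

    def supports_manual_scan():
        try:
            # A non-zero exit raises ProcessExecutionError, i.e. string absent.
            processutils.execute('grep', '-F', 'node.session.scan',
                                 '/sbin/iscsiadm')
            return True
        except processutils.ProcessExecutionError:
            return False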
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.734 188548 INFO nova.virt.driver [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Nov 29 10:16:42 np0005539860 python3.9[189010]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.841 188548 INFO nova.compute.provider_config [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.866 188548 DEBUG oslo_concurrency.lockutils [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.867 188548 DEBUG oslo_concurrency.lockutils [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.867 188548 DEBUG oslo_concurrency.lockutils [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.867 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.868 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.868 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.868 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.868 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.868 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.869 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.869 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.869 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.869 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.869 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.869 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.870 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.870 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.870 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.870 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.870 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.871 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.871 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.871 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.871 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.871 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.871 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.872 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.872 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.872 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.872 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.872 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.873 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.873 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.873 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.873 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.873 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.874 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.874 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.874 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.874 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.874 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.875 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.875 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.875 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.875 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.875 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.875 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.876 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.876 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.876 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.876 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.876 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.877 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.877 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.877 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.877 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.877 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.877 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.878 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.878 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.878 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.878 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.878 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.878 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.879 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.879 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.879 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.879 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.879 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.880 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.880 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.880 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.880 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.880 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.880 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.881 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.881 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.881 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.881 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.881 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.881 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.882 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.882 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.882 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.882 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.882 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.883 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.883 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.883 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.883 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.883 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.883 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.884 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.884 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.884 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.884 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.884 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.885 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.885 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.885 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.885 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.885 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.885 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.886 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.886 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.886 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.886 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.886 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.887 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.887 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.887 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.887 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.887 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.887 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.888 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.888 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.888 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.888 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.888 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.888 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.889 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.889 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.889 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.889 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.889 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.890 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.890 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.890 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.890 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.890 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.890 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.891 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.891 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.891 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.891 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.891 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.891 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.892 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.892 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.892 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.892 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.892 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.893 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.893 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.893 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.893 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.893 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.893 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.894 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.894 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.894 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
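The block above is oslo.config dumping every registered top-level option at service startup; the "log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602" reference at the end of each line points at the helper that emits them (the per-group lines that follow log from cfg.py:2609 instead). A minimal sketch of that mechanism, assuming only that oslo.config is installed; the two option names mirror defaults logged above, everything else is illustrative:

import logging

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_opts([
    cfg.IntOpt('report_interval', default=10),
    cfg.BoolOpt('use_cow_images', default=True),
])

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger('oslo_service.service')

CONF([])                                 # parse an empty argv; defaults apply
CONF.log_opt_values(LOG, logging.DEBUG)  # one "name = value" DEBUG line per option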
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.894 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.894 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.895 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.895 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.895 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.895 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.895 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.895 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.896 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.896 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.896 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.896 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.896 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.897 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.897 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.897 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.897 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.897 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.897 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.898 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.898 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.898 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.898 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.898 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.899 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.899 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.899 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.899 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.899 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.900 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
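The api.* values above are group-scoped options. A short sketch of how a grouped option such as api.max_limit is registered and read through oslo.config; the default mirrors the value logged above, and the assert is purely illustrative:

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_group(cfg.OptGroup('api'))
CONF.register_opts([cfg.IntOpt('max_limit', default=1000)], group='api')
CONF([])
assert CONF.api.max_limit == 1000  # matches "api.max_limit = 1000" above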
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.900 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.900 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.900 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.900 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.900 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.901 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.901 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.901 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.901 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.901 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.902 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.902 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.902 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.902 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.902 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.902 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.903 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.903 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.903 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.903 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.903 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.903 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.904 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.904 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.904 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.904 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.904 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.905 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.905 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.905 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.905 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.905 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
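The cache.* block above shows oslo.cache left at its stock values: the in-process dict backend, the default localhost memcached server list, TLS and SASL off. A sketch of how a deployment could override the backend at runtime, assuming oslo.config's set_override(); the dogpile backend name is illustrative, not taken from this host:

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_group(cfg.OptGroup('cache'))
CONF.register_opts([
    cfg.StrOpt('backend', default='oslo_cache.dict'),
    cfg.ListOpt('memcache_servers', default=['localhost:11211']),
], group='cache')
CONF([])
# switch to a real memcached-backed dogpile backend for this process
CONF.set_override('backend', 'dogpile.cache.memcached', group='cache')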
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.906 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.906 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.906 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.906 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.906 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.906 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.907 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.907 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.907 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.907 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.907 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.908 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.908 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.908 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.908 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
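cinder.catalog_info above is the colon-separated triple nova uses to pick the volume endpoint out of the Keystone service catalog: service type, service name, and endpoint interface. A trivial parse of the logged value, for illustration only:

catalog_info = 'volumev3:cinderv3:internalURL'
service_type, service_name, interface = catalog_info.split(':')
assert interface == 'internalURL'  # internal endpoint, per the line above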
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.908 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.909 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.909 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.909 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.909 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.909 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.909 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.910 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.910 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.910 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.910 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.910 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.911 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.911 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.911 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.911 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.911 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.912 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.912 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.912 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.912 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.912 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.912 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.913 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.913 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.913 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.913 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.913 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.914 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.914 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.914 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.914 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.914 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.914 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.915 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.915 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.915 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.915 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.915 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.916 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.916 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.916 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.916 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.916 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.917 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.917 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.917 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.917 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.917 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.917 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.918 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.918 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.918 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.918 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.918 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.919 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
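database.connection and database.slave_connection above print as **** because oslo.config masks any option registered with secret=True when dumping values, which keeps credentials out of the journal (transport_url and cache.backend_argument earlier are masked the same way). A minimal sketch of that masking; the URL below is a placeholder, not this deployment's connection string:

import logging

from oslo_config import cfg

CONF = cfg.CONF
CONF.register_group(cfg.OptGroup('database'))
CONF.register_opts(
    [cfg.StrOpt('connection', secret=True,
                default='mysql+pymysql://nova:PLACEHOLDER@db/nova')],
    group='database')
CONF([])
logging.basicConfig(level=logging.DEBUG)
CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
# the dump prints "database.connection = ****", never the real URL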
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.919 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.919 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.919 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.919 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.919 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.920 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.920 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.920 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.920 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.920 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.921 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.921 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.921 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.921 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.921 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.922 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.922 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.922 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.922 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.922 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.923 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.923 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.923 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.923 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.923 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.923 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.924 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.924 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.924 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.924 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.924 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.925 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.925 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.925 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.925 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.925 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.925 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.926 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.926 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.926 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.926 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.926 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.927 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.927 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.927 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.927 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.927 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.928 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.928 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.928 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.928 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.928 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.929 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.929 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.929 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.929 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.929 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.929 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.930 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.930 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.930 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.930 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.930 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.931 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.931 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.931 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.931 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.931 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.932 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.932 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.932 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.932 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.932 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.933 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.933 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.933 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.934 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.934 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.934 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.934 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.934 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.935 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.935 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.935 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.935 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.935 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.935 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.936 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.936 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.936 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.936 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.936 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.937 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.937 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.937 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.937 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.937 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.938 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.938 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.938 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.938 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.938 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.939 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.939 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.939 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.939 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.940 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.940 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.940 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.940 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.940 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.941 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.941 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.941 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.941 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.941 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.942 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.942 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.942 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.942 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.942 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.943 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.943 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.943 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.943 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.943 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.944 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.944 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.944 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.944 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.944 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.945 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.945 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.945 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.945 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.945 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.946 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.946 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.946 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.946 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.947 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.947 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.947 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.947 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.948 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.948 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.948 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.948 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.948 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.949 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.949 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.949 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.949 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.949 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.950 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.950 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.950 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.950 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.951 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.951 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.951 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.951 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.951 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.952 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.952 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.952 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.952 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.952 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.953 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.953 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.953 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.953 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.953 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.954 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.954 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.954 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.954 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.955 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.955 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.955 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.955 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.956 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.956 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.956 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.956 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.956 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.957 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.957 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.957 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.957 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.958 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.958 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.958 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.958 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.959 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.959 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.959 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.959 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.959 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.960 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.960 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.960 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.960 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.960 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.961 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.961 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.961 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.961 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.962 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.962 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.962 188548 WARNING oslo_config.cfg [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (live_migration_uri is deprecated for removal in favor of two other options that allow changing the live migration scheme and target URI: ``live_migration_scheme`` and ``live_migration_inbound_addr``, respectively). Its value may be silently ignored in the future.#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.962 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
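
The WARNING above comes from oslo.config's option-deprecation machinery: nova declares ``live_migration_uri`` as deprecated for removal, and the declared reason is printed the first time a config file that still sets it is loaded. A minimal sketch of such a declaration (illustrative, not nova's actual source; the option list and help text here are assumptions):

    from oslo_config import cfg

    # Illustrative: an option flagged deprecated_for_removal triggers the
    # "Deprecated: Option ... is deprecated for removal" WARNING above when
    # a config file still sets it.
    libvirt_opts = [
        cfg.StrOpt(
            'live_migration_uri',
            deprecated_for_removal=True,
            deprecated_reason='Replaced by ``live_migration_scheme`` and '
                              '``live_migration_inbound_addr``.',
            help='Override the libvirt live migration target URI.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(libvirt_opts, group='libvirt')

Per the warning text, replacing the setting with ``live_migration_scheme = tls`` (matching the ``qemu+tls`` scheme in the configured value) plus ``live_migration_inbound_addr`` should silence it.
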
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.963 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.963 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.963 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.963 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.963 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.964 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.964 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.964 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.964 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.965 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.965 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.965 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.965 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.966 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.966 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.966 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.966 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.966 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.967 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.967 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.967 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.967 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.967 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.968 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.968 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.968 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.968 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.969 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.969 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.969 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.969 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.970 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.970 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.970 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.970 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.970 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.970 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.971 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.971 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.971 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.971 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.971 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.972 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.972 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.972 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.972 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.972 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.972 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.973 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.973 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.973 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
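
Every ``group.option = value`` line in this dump is emitted by oslo.config's ``log_opt_values``, which the service calls once at startup when debug logging is enabled (note the ``log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609`` context on each line). A self-contained sketch that reproduces the same kind of dump for a single assumed option:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    # Register one option so there is something to dump; nova registers
    # hundreds across many groups.
    CONF.register_opts(
        [cfg.IntOpt('wait_soft_reboot_seconds', default=120)],
        group='libvirt')

    # Parse defaults only (no config file needed for the sketch), then log
    # every registered option and its effective value at DEBUG.
    CONF([], project='nova')
    CONF.log_opt_values(LOG, logging.DEBUG)
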
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.973 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.973 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.974 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.974 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.974 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.974 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.974 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.974 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.975 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.975 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.975 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.975 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.975 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.976 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.976 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.976 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.976 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.976 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.976 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.977 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.977 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.977 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.977 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.978 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.978 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.978 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.978 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.979 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
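
The [neutron] options above are the standard keystoneauth1 plugin and session options registered under a ``neutron`` group; secret values such as ``metadata_proxy_shared_secret`` are masked as ``****`` by the dumper. A hedged sketch of how a group like this is typically turned into an authenticated session (group registration shown explicitly so the sketch is self-contained):

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_auth_conf_options(CONF, 'neutron')
    ks_loading.register_session_conf_options(CONF, 'neutron')
    CONF([], project='nova')

    # auth_type = password selects the password auth plugin; cafile,
    # timeout, insecure, etc. feed the session.
    auth = ks_loading.load_auth_from_conf_options(CONF, 'neutron')
    session = ks_loading.load_session_from_conf_options(
        CONF, 'neutron', auth=auth)
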
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.979 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.979 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.979 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.980 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.980 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
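
With ``notification_format = unversioned`` only the legacy payloads are emitted; had it been ``versioned`` (or ``both``), versioned payloads would go to the ``versioned_notifications`` topic listed above. A sketch of how such a notifier is typically constructed with oslo.messaging (the ``publisher_id`` value here is a made-up example):

    import oslo_messaging
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF([], project='nova')

    transport = oslo_messaging.get_notification_transport(CONF)
    notifier = oslo_messaging.Notifier(
        transport,
        publisher_id='nova-compute:np0005539860',  # assumed example value
        driver='messagingv2',
        topics=['versioned_notifications'])
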
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.980 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.980 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.981 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.981 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.981 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.981 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.981 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.981 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.982 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.982 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.982 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.982 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.982 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.983 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.983 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.983 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.983 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.983 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.984 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.984 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.984 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.984 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.985 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.985 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.985 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.985 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.986 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.986 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.986 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.986 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.986 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.987 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.987 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.987 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.987 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.988 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.988 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.988 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.988 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.988 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
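
The [placement] group follows the same keystoneauth pattern as [neutron], plus adapter options: ``service_type = placement``, ``region_name = regionOne`` and ``valid_interfaces = ['internal']`` select the catalog endpoint. A sketch extending the previous one with an adapter:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_auth_conf_options(CONF, 'placement')
    ks_loading.register_session_conf_options(CONF, 'placement')
    ks_loading.register_adapter_conf_options(CONF, 'placement')
    CONF([], project='nova')

    auth = ks_loading.load_auth_from_conf_options(CONF, 'placement')
    session = ks_loading.load_session_from_conf_options(
        CONF, 'placement', auth=auth)
    # The adapter resolves the endpoint from the catalog using
    # service_type, region_name and valid_interfaces.
    placement = ks_loading.load_adapter_from_conf_options(
        CONF, 'placement', session=session)
    # e.g. placement.get('/resource_providers') would then hit the
    # regionOne internal placement endpoint.
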
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.988 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.989 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.989 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.989 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.989 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.990 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.990 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.990 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.990 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.990 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.991 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.991 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.991 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
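
The [quota] values are per-project defaults enforced by the configured ``nova.quota.DbQuotaDriver``; real enforcement also counts current usage and honors per-project overrides, which this sketch deliberately ignores. A minimal check against just the configured defaults (``within_default_quota`` is a hypothetical helper):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts(
        [cfg.IntOpt('instances', default=10),
         cfg.IntOpt('cores', default=20),
         cfg.IntOpt('ram', default=51200)],  # RAM quota is in MiB
        group='quota')
    CONF([], project='nova')

    def within_default_quota(instances, cores, ram_mb):
        """Compare a request against the configured defaults only."""
        return (instances <= CONF.quota.instances
                and cores <= CONF.quota.cores
                and ram_mb <= CONF.quota.ram)

    assert within_default_quota(instances=2, cores=4, ram_mb=8192)
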
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.991 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.992 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.992 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.992 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.993 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.993 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.993 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.993 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.993 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.994 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.994 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.994 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.994 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.995 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.995 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.995 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.995 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.996 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.996 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.996 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.996 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.997 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.997 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.997 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.997 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.997 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.998 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.998 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.998 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.998 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.998 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.999 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.999 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.999 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.999 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:42 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.999 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:42.999 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
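
``available_filters = ['nova.scheduler.filters.all_filters']`` makes every in-tree filter loadable, while ``enabled_filters`` names the five that actually run; options such as ``max_instances_per_host`` only take effect when their corresponding filter (here, NumInstancesFilter) is enabled. A custom filter is just a class with a ``host_passes`` predicate; a hedged sketch assuming nova's in-tree ``BaseHostFilter`` interface (the filter name and threshold are made up):

    from nova.scheduler import filters


    class FewInstancesFilter(filters.BaseHostFilter):
        """Hypothetical filter: skip hosts already running >= 50 instances."""

        # Do not re-run this filter when rebuilding on the same host.
        RUN_ON_REBUILD = False

        def host_passes(self, host_state, spec_obj):
            # host_state carries per-host stats; spec_obj is the RequestSpec
            # for the instance being scheduled.
            return host_state.num_instances < 50

To take effect, such a class would be added to ``[filter_scheduler] enabled_filters`` on the scheduler nodes.
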
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.000 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.000 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.000 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.000 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
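
The various ``*_weight_multiplier`` options above (and ``metrics.weight_multiplier`` here) all feed the same scheme: each weigher scores every host on a normalized scale, the score is scaled by its multiplier (negative values, like ``io_ops_weight_multiplier = -1.0``, turn a metric into a penalty), and hosts are ranked by the sum. With ``metrics.required = True``, a host missing a requested metric is rejected outright rather than scored with ``weight_of_unavailable``. A toy illustration of the weighted-sum ranking (not nova code; all scores are invented):

    def weigh_hosts(hosts, weighers):
        """hosts: list of host names; weighers: (multiplier, score_fn) pairs
        where score_fn returns a normalized score in [0.0, 1.0]."""
        totals = {
            host: sum(mult * score(host) for mult, score in weighers)
            for host in hosts
        }
        return sorted(hosts, key=totals.get, reverse=True)

    # Hypothetical scores: host-a has more free RAM, both have equal I/O load.
    ranked = weigh_hosts(
        ['host-a', 'host-b'],
        [(1.0, lambda h: 0.8 if h == 'host-a' else 0.3),  # RAM weigher
         (-1.0, lambda h: 0.5)])                          # io_ops penalty
    assert ranked[0] == 'host-a'
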
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.000 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.001 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.001 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.001 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.001 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.001 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.002 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.002 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.002 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.002 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.002 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.003 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.003 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.003 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.003 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.003 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.004 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.004 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.004 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.004 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.004 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.005 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.005 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.005 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.005 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.005 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.006 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.006 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.006 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.006 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.006 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.007 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.007 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.007 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.007 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.008 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.008 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.008 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.008 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.009 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.009 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.009 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.009 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.009 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.009 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.010 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.010 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.010 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.010 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.010 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.011 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.011 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.011 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.011 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.011 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.011 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.012 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.012 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.012 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.012 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.012 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.013 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.013 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.013 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.013 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.014 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.014 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.014 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.014 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.015 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.015 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.015 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.015 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.016 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.016 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.016 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.016 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.017 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.017 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.017 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.017 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.017 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.017 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.018 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.018 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.018 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.018 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.018 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.019 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.019 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.019 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.019 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.019 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.019 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.020 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.020 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.020 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.020 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.020 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.021 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.021 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.021 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.021 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.021 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.021 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.022 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.022 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.022 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.022 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.022 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.023 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.023 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.023 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.023 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.023 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.024 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.024 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.024 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.024 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.024 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.025 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.025 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.025 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.025 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.025 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.026 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.026 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.026 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.026 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.026 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.027 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.027 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.027 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.027 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.027 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.028 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.028 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.028 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.028 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.028 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.028 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.029 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.029 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.029 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.029 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.029 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.030 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.030 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.030 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.030 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.030 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.031 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.031 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.031 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.031 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.031 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.031 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.032 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.032 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.032 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.032 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.032 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.033 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.033 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.033 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.033 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.033 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.034 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.034 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.034 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.034 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.034 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.035 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.035 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.035 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.035 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.035 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.036 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.036 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.036 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.036 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.036 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.036 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.037 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.037 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.037 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.037 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.037 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.038 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.038 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.038 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.038 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.038 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.039 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.039 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.039 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.039 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.039 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.039 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.040 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.040 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.040 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.040 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.041 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.041 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.041 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.041 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.041 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.042 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.042 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.042 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.042 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.042 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.042 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.043 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.043 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.043 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.043 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.043 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.044 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.044 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.044 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.044 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.044 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.044 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.045 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.045 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.045 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.045 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.045 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.046 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.046 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.046 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.046 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.046 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.046 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.047 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.047 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.047 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.047 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.047 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.048 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.048 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.048 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.048 188548 DEBUG oslo_service.service [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
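The block above is oslo.config's standard startup dump: on service start, oslo_service walks every registered option group and logs one DEBUG line per option via ConfigOpts.log_opt_values(), closing with the row of asterisks. A minimal sketch of the same mechanism, assuming only that oslo.config is installed; the single option and group registered here are illustrative stand-ins for the real groups (oslo_limit, os_vif_ovs, privsep_osbrick, ...), which are registered by their own libraries:

    import logging

    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    CONF = cfg.CONF

    # Illustrative option mirroring one row of the dump above.
    CONF.register_opts(
        [cfg.IntOpt('thread_pool_size', default=8,
                    help='Workers in the privsep thread pool.')],
        group='privsep_osbrick')

    logging.basicConfig(level=logging.DEBUG)
    CONF([], project='nova')
    # Emits one "privsep_osbrick.thread_pool_size = 8" style line per
    # option, then the row of asterisks (cfg.py log_opt_values).
    CONF.log_opt_values(LOG, logging.DEBUG)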
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.049 188548 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.068 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.069 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.069 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.070 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Nov 29 10:16:43 np0005539860 systemd[1]: Starting libvirt QEMU daemon...
Nov 29 10:16:43 np0005539860 systemd[1]: Started libvirt QEMU daemon.
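Opening qemu:///system is what socket-activates the libvirt QEMU daemon that systemd reports starting above; nova's "native event thread" then drives libvirt's default event loop. A minimal sketch with the libvirt Python bindings, under the assumption that the event loop must be registered before the connection is opened (as nova's host.py does):

    import threading

    import libvirt

    # Register the default event implementation before connecting.
    libvirt.virEventRegisterDefaultImpl()

    def _event_loop():
        while True:
            libvirt.virEventRunDefaultImpl()

    threading.Thread(target=_event_loop, daemon=True).start()

    # Connecting to the system URI triggers the socket activation
    # recorded by systemd above.
    conn = libvirt.open('qemu:///system')
    print(conn.getHostname())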
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.156 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fe5e599c790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.159 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fe5e599c790> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.160 188548 INFO nova.virt.libvirt.driver [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Connection event '1' reason 'None'
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.176 188548 WARNING nova.virt.libvirt.driver [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Nov 29 10:16:43 np0005539860 nova_compute[188544]: 2025-11-29 15:16:43.177 188548 DEBUG nova.virt.libvirt.volume.mount [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 10:16:43 np0005539860 python3.9[189222]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
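The ansible-invoked podman_container task above (state=absent with force_delete=True) boils down to a forced, idempotent removal of the nova_nvme_cleaner container. A rough Python sketch of the equivalent via the podman CLI; these are standard podman rm flags, not taken from the module source:

    import subprocess

    def remove_container(name: str) -> None:
        # --force stops a running container first; --ignore makes the
        # call a no-op when no such container exists, matching the
        # idempotent state=absent semantics of the ansible task.
        subprocess.run(['podman', 'rm', '--force', '--ignore', name],
                       check=True)

    remove_container('nova_nvme_cleaner')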
Nov 29 10:16:44 np0005539860 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.104 188548 INFO nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <host>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <uuid>0615934f-a8e3-4c06-8053-42a9c2c49d13</uuid>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <arch>x86_64</arch>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model>EPYC-Rome-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <vendor>AMD</vendor>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <microcode version='16777317'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <signature family='23' model='49' stepping='0'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='x2apic'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='tsc-deadline'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='osxsave'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='hypervisor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='tsc_adjust'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='spec-ctrl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='stibp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='arch-capabilities'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='cmp_legacy'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='topoext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='virt-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='lbrv'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='tsc-scale'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='vmcb-clean'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='pause-filter'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='pfthreshold'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='svme-addr-chk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='rdctl-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='mds-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature name='pschange-mc-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <pages unit='KiB' size='4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <pages unit='KiB' size='2048'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <pages unit='KiB' size='1048576'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <power_management>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <suspend_mem/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <suspend_disk/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <suspend_hybrid/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </power_management>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <iommu support='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <migration_features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <live/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <uri_transports>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <uri_transport>tcp</uri_transport>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <uri_transport>rdma</uri_transport>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </uri_transports>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </migration_features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <topology>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <cells num='1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <cell id='0'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:          <memory unit='KiB'>7864316</memory>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:          <pages unit='KiB' size='4'>1966079</pages>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:          <distances>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <sibling id='0' value='10'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:          </distances>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:          <cpus num='8'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:          </cpus>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        </cell>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </cells>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </topology>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <cache>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </cache>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <secmodel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model>selinux</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <doi>0</doi>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </secmodel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <secmodel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model>dac</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <doi>0</doi>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </secmodel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </host>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <guest>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <os_type>hvm</os_type>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <arch name='i686'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <wordsize>32</wordsize>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <domain type='qemu'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <domain type='kvm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </arch>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <pae/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <nonpae/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <acpi default='on' toggle='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <apic default='on' toggle='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <cpuselection/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <deviceboot/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <disksnapshot default='on' toggle='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <externalSnapshot/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </guest>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <guest>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <os_type>hvm</os_type>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <arch name='x86_64'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <wordsize>64</wordsize>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <domain type='qemu'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <domain type='kvm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </arch>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <acpi default='on' toggle='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <apic default='on' toggle='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <cpuselection/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <deviceboot/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <disksnapshot default='on' toggle='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <externalSnapshot/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </guest>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 
Nov 29 10:16:44 np0005539860 nova_compute[188544]: </capabilities>
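The capabilities document above is the raw return value of virConnect.getCapabilities(). A short sketch extracting the NUMA cell inventory nova's resource tracker cares about (memory, per-page-size counts, CPU ids), using only element paths visible in the dump:

    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())

    for cell in caps.findall('./host/topology/cells/cell'):
        mem_kib = cell.findtext('memory')                  # e.g. 7864316
        pages = {p.get('size'): p.text for p in cell.findall('pages')}
        cpu_ids = [c.get('id') for c in cell.findall('./cpus/cpu')]
        print(f"cell {cell.get('id')}: {mem_kib} KiB, "
              f"pages={pages}, cpus={cpu_ids}")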
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.110 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
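Each of the per-arch, per-machine-type dumps that follow is fetched with virConnect.getDomainCapabilities(), once per combination found in the guest capabilities above. A sketch of the same query, with parameter values taken from the log line above (keyword names as exposed by the libvirt Python bindings):

    import libvirt

    conn = libvirt.open('qemu:///system')
    xml = conn.getDomainCapabilities(
        emulatorbin='/usr/libexec/qemu-kvm',
        arch='i686',
        machine='q35',
        virttype='kvm')
    print(xml.splitlines()[0])   # <domainCapabilities>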
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.130 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 10:16:44 np0005539860 nova_compute[188544]: <domainCapabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <domain>kvm</domain>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <arch>i686</arch>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <vcpu max='4096'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <iothreads supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <os supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <enum name='firmware'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <loader supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>rom</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pflash</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='readonly'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>yes</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>no</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='secure'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>no</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </loader>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </os>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='host-passthrough' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='hostPassthroughMigratable'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>on</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>off</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='maximum' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='maximumMigratable'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>on</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>off</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='host-model' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <vendor>AMD</vendor>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='x2apic'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='hypervisor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='stibp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='overflow-recov'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='succor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='lbrv'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc-scale'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='flushbyasid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='pause-filter'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='pfthreshold'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='disable' name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='custom' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Dhyana-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Genoa'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='auto-ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='auto-ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-128'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-256'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-512'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v6'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v7'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='KnightsMill'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512er'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512pf'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='KnightsMill-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512er'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512pf'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G4-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tbm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G5-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tbm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SierraForest'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cmpccxadd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SierraForest-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cmpccxadd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='athlon'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='athlon-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='core2duo'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='core2duo-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='coreduo'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='coreduo-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='n270'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='n270-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='phenom'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='phenom-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <memoryBacking supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <enum name='sourceType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>file</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>anonymous</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>memfd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </memoryBacking>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <devices>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <disk supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='diskDevice'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>disk</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>cdrom</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>floppy</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>lun</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='bus'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>fdc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>scsi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>sata</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-non-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </disk>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <graphics supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vnc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>egl-headless</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dbus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </graphics>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <video supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='modelType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vga</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>cirrus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>none</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>bochs</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ramfb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </video>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <hostdev supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='mode'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>subsystem</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='startupPolicy'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>default</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>mandatory</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>requisite</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>optional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='subsysType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pci</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>scsi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='capsType'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='pciBackend'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </hostdev>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <rng supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-non-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>random</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>egd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>builtin</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </rng>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <filesystem supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='driverType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>path</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>handle</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtiofs</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </filesystem>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <tpm supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tpm-tis</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tpm-crb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>emulator</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>external</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendVersion'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>2.0</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </tpm>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <redirdev supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='bus'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </redirdev>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <channel supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pty</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>unix</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </channel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <crypto supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>qemu</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>builtin</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </crypto>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <interface supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>default</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>passt</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </interface>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <panic supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>isa</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>hyperv</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </panic>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <console supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>null</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pty</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dev</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>file</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pipe</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>stdio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>udp</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tcp</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>unix</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>qemu-vdagent</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dbus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </console>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </devices>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <gic supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <vmcoreinfo supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <genid supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <backingStoreInput supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <backup supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <async-teardown supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <ps2 supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <sev supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <sgx supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <hyperv supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='features'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>relaxed</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vapic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>spinlocks</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vpindex</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>runtime</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>synic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>stimer</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>reset</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vendor_id</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>frequencies</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>reenlightenment</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tlbflush</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ipi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>avic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>emsr_bitmap</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>xmm_input</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <defaults>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <spinlocks>4095</spinlocks>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <stimer_direct>on</stimer_direct>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </defaults>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </hyperv>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <launchSecurity supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='sectype'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tdx</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </launchSecurity>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: </domainCapabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
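The dumps above and below are raw libvirt domain-capabilities documents, fetched by Nova's _get_domain_capabilities() once per emulator/arch/machine-type/virt-type combination (the debug line below shows the next query, for arch=i686 and machine_type=pc). A minimal sketch of the same query outside of Nova follows; this is not Nova's actual code. It assumes the libvirt Python bindings are installed and a local qemu:///system socket is reachable, and the query parameters simply mirror the logged request:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")

    # Same query the log records: capabilities of a given emulator binary
    # for one arch / machine type / virt type combination.
    caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator, as in <path> above
        "i686",                   # arch, as in the debug line below
        "pc",                     # machine type alias
        "kvm",                    # virt type, as in <domain>kvm</domain>
    )
    root = ET.fromstring(caps_xml)

    # Console back-end types (<devices><console><enum name='type'>):
    console_types = [
        v.text
        for v in root.findall("./devices/console/enum[@name='type']/value")
    ]
    print("console types:", console_types)

    # Hyper-V enlightenments the host can expose (<features><hyperv>):
    hyperv = [
        v.text
        for v in root.findall("./features/hyperv/enum[@name='features']/value")
    ]
    print("hyperv features:", hyperv)

    # Named CPU models directly usable on this host. Models reported with
    # usable='no' carry a sibling <blockers> element naming the CPU flags
    # the host lacks (e.g. xsaves for the EPYC-Rome variants below).
    usable = [
        m.text
        for m in root.findall("./cpu/mode[@name='custom']/model")
        if m.get("usable") == "yes"
    ]
    print("usable CPU models:", usable)

    conn.close()

The same XML can be fetched from a shell on the compute node with `virsh domcapabilities --emulatorbin /usr/libexec/qemu-kvm --arch i686 --machine pc --virttype kvm`, which is convenient when checking why a given named CPU model is reported usable='no'.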
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.140 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 10:16:44 np0005539860 nova_compute[188544]: <domainCapabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <domain>kvm</domain>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <arch>i686</arch>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <vcpu max='240'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <iothreads supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <os supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <enum name='firmware'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <loader supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>rom</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pflash</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='readonly'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>yes</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>no</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='secure'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>no</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </loader>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </os>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='host-passthrough' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='hostPassthroughMigratable'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>on</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>off</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='maximum' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='maximumMigratable'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>on</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>off</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='host-model' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <vendor>AMD</vendor>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='x2apic'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='hypervisor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='stibp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='overflow-recov'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='succor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='lbrv'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc-scale'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='flushbyasid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='pause-filter'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='pfthreshold'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='disable' name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='custom' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Dhyana-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Genoa'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='auto-ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='auto-ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-128'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-256'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-512'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v6'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v7'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='KnightsMill'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512er'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512pf'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='KnightsMill-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512er'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512pf'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G4-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tbm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G5-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tbm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SierraForest'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cmpccxadd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SierraForest-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cmpccxadd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='athlon'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='athlon-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='core2duo'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='core2duo-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='coreduo'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='coreduo-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='n270'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='n270-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='phenom'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='phenom-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <memoryBacking supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <enum name='sourceType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>file</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>anonymous</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>memfd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </memoryBacking>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <devices>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <disk supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='diskDevice'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>disk</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>cdrom</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>floppy</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>lun</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='bus'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ide</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>fdc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>scsi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>sata</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-non-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </disk>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <graphics supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vnc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>egl-headless</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dbus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </graphics>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <video supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='modelType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vga</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>cirrus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>none</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>bochs</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ramfb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </video>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <hostdev supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='mode'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>subsystem</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='startupPolicy'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>default</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>mandatory</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>requisite</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>optional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='subsysType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pci</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>scsi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='capsType'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='pciBackend'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </hostdev>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <rng supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-non-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>random</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>egd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>builtin</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </rng>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <filesystem supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='driverType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>path</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>handle</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtiofs</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </filesystem>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <tpm supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tpm-tis</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tpm-crb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>emulator</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>external</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendVersion'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>2.0</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </tpm>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <redirdev supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='bus'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </redirdev>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <channel supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pty</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>unix</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </channel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <crypto supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>qemu</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>builtin</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </crypto>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <interface supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>default</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>passt</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </interface>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <panic supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>isa</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>hyperv</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </panic>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <console supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>null</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pty</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dev</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>file</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pipe</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>stdio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>udp</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tcp</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>unix</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>qemu-vdagent</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dbus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </console>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </devices>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <gic supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <vmcoreinfo supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <genid supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <backingStoreInput supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <backup supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <async-teardown supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <ps2 supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <sev supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <sgx supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <hyperv supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='features'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>relaxed</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vapic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>spinlocks</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vpindex</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>runtime</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>synic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>stimer</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>reset</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vendor_id</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>frequencies</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>reenlightenment</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tlbflush</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ipi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>avic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>emsr_bitmap</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>xmm_input</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <defaults>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <spinlocks>4095</spinlocks>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <stimer_direct>on</stimer_direct>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </defaults>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </hyperv>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <launchSecurity supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='sectype'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tdx</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </launchSecurity>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: </domainCapabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
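The XML dump that ends above is the libvirt domainCapabilities document that nova-compute fetches once per (arch, machine type) pair; the debug line records the nova helper that issued the call. Below is a minimal sketch of retrieving the same document directly with libvirt-python and listing the custom-mode CPU models reported usable; the connection URI is an assumption, while the emulator path, arch, machine type, and virt type are taken from the dump itself. This is not nova's code, only the underlying libvirt API it wraps.

    import libvirt                         # assumption: libvirt-python is installed
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')  # assumed URI; adjust for the host

    # virConnect.getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',  # <path> from the dump above
        'x86_64',                 # <arch>
        'q35',                    # one of the machine types nova iterates over
        'kvm',                    # <domain>
        0)

    root = ET.fromstring(caps_xml)
    # Print custom-mode CPU models with usable='yes' (e.g. Westmere in the dump).
    for model in root.findall(".//cpu/mode[@name='custom']/model[@usable='yes']"):
        print(model.text)

    conn.close()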
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.173 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.177 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 10:16:44 np0005539860 nova_compute[188544]: <domainCapabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <domain>kvm</domain>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <arch>x86_64</arch>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <vcpu max='4096'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <iothreads supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <os supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <enum name='firmware'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>efi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <loader supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>rom</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pflash</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='readonly'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>yes</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>no</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='secure'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>yes</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>no</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </loader>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </os>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='host-passthrough' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='hostPassthroughMigratable'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>on</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>off</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='maximum' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='maximumMigratable'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>on</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>off</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='host-model' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <vendor>AMD</vendor>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='x2apic'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='hypervisor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='stibp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='overflow-recov'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='succor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='lbrv'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc-scale'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='flushbyasid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='pause-filter'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='pfthreshold'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='disable' name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='custom' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Dhyana-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Genoa'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='auto-ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='auto-ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-128'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-256'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-512'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v6'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v7'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='KnightsMill'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512er'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512pf'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='KnightsMill-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512er'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512pf'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G4-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tbm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G5-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tbm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SierraForest'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cmpccxadd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SierraForest-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cmpccxadd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='athlon'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='athlon-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='core2duo'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='core2duo-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='coreduo'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='coreduo-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='n270'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='n270-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='phenom'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='phenom-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <memoryBacking supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <enum name='sourceType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>file</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>anonymous</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>memfd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </memoryBacking>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <devices>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <disk supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='diskDevice'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>disk</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>cdrom</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>floppy</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>lun</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='bus'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>fdc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>scsi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>sata</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-non-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </disk>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <graphics supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vnc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>egl-headless</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dbus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </graphics>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <video supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='modelType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vga</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>cirrus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>none</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>bochs</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ramfb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </video>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <hostdev supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='mode'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>subsystem</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='startupPolicy'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>default</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>mandatory</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>requisite</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>optional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='subsysType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pci</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>scsi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='capsType'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='pciBackend'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </hostdev>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <rng supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-non-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>random</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>egd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>builtin</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </rng>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <filesystem supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='driverType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>path</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>handle</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtiofs</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </filesystem>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <tpm supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tpm-tis</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tpm-crb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>emulator</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>external</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendVersion'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>2.0</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </tpm>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <redirdev supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='bus'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </redirdev>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <channel supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pty</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>unix</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </channel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <crypto supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>qemu</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>builtin</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </crypto>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <interface supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>default</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>passt</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </interface>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <panic supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>isa</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>hyperv</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </panic>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <console supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>null</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pty</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dev</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>file</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pipe</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>stdio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>udp</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tcp</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>unix</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>qemu-vdagent</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dbus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </console>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </devices>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <gic supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <vmcoreinfo supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <genid supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <backingStoreInput supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <backup supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <async-teardown supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <ps2 supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <sev supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <sgx supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <hyperv supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='features'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>relaxed</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vapic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>spinlocks</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vpindex</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>runtime</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>synic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>stimer</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>reset</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vendor_id</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>frequencies</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>reenlightenment</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tlbflush</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ipi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>avic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>emsr_bitmap</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>xmm_input</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <defaults>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <spinlocks>4095</spinlocks>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <stimer_direct>on</stimer_direct>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </defaults>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </hyperv>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <launchSecurity supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='sectype'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tdx</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </launchSecurity>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: </domainCapabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.235 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 10:16:44 np0005539860 nova_compute[188544]: <domainCapabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <domain>kvm</domain>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <arch>x86_64</arch>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <vcpu max='240'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <iothreads supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <os supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <enum name='firmware'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <loader supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>rom</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pflash</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='readonly'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>yes</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>no</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='secure'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>no</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </loader>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </os>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='host-passthrough' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='hostPassthroughMigratable'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>on</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>off</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='maximum' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='maximumMigratable'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>on</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>off</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='host-model' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <vendor>AMD</vendor>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='x2apic'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='hypervisor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='stibp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='overflow-recov'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='succor'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='lbrv'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='tsc-scale'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='flushbyasid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='pause-filter'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='pfthreshold'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <feature policy='disable' name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <mode name='custom' supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Broadwell-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Cooperlake-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Denverton-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Dhyana-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Genoa'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='auto-ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='auto-ibrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Milan-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amd-psfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='stibp-always-on'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-Rome-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='EPYC-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='GraniteRapids-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-128'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-256'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx10-512'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='prefetchiti'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Haswell-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v6'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Icelake-Server-v7'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='IvyBridge-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='KnightsMill'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512er'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512pf'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='KnightsMill-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512er'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512pf'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G4-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tbm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Opteron_G5-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fma4'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tbm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xop'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SapphireRapids-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='amx-tile'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-bf16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-fp16'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bitalg'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrc'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fzrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='la57'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='taa-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xfd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SierraForest'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cmpccxadd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='SierraForest-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ifma'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cmpccxadd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fbsdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='fsrs'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ibrs-all'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mcdt-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pbrsb-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='psdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='serialize'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vaes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Client-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='hle'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='rtm'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Skylake-Server-v5'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512bw'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512cd'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512dq'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512f'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='avx512vl'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='invpcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pcid'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='pku'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='mpx'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v2'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v3'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='core-capability'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='split-lock-detect'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='Snowridge-v4'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='cldemote'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='erms'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='gfni'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdir64b'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='movdiri'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='xsaves'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='athlon'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='athlon-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='core2duo'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='core2duo-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='coreduo'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='coreduo-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='n270'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='n270-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='ss'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='phenom'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <blockers model='phenom-v1'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnow'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <feature name='3dnowext'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </blockers>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </mode>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </cpu>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <memoryBacking supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <enum name='sourceType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>file</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>anonymous</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <value>memfd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </memoryBacking>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <devices>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <disk supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='diskDevice'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>disk</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>cdrom</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>floppy</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>lun</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='bus'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ide</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>fdc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>scsi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>sata</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-non-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </disk>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <graphics supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vnc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>egl-headless</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dbus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </graphics>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <video supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='modelType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vga</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>cirrus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>none</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>bochs</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ramfb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </video>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <hostdev supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='mode'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>subsystem</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='startupPolicy'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>default</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>mandatory</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>requisite</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>optional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='subsysType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pci</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>scsi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='capsType'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='pciBackend'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </hostdev>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <rng supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtio-non-transitional</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>random</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>egd</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>builtin</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </rng>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <filesystem supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='driverType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>path</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>handle</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>virtiofs</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </filesystem>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <tpm supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tpm-tis</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tpm-crb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>emulator</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>external</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendVersion'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>2.0</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </tpm>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <redirdev supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='bus'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>usb</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </redirdev>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <channel supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pty</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>unix</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </channel>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <crypto supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>qemu</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendModel'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>builtin</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </crypto>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <interface supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='backendType'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>default</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>passt</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </interface>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <panic supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='model'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>isa</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>hyperv</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </panic>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <console supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='type'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>null</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vc</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pty</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dev</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>file</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>pipe</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>stdio</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>udp</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tcp</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>unix</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>qemu-vdagent</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>dbus</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </console>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </devices>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  <features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <gic supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <vmcoreinfo supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <genid supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <backingStoreInput supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <backup supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <async-teardown supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <ps2 supported='yes'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <sev supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <sgx supported='no'/>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <hyperv supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='features'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>relaxed</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vapic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>spinlocks</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vpindex</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>runtime</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>synic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>stimer</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>reset</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>vendor_id</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>frequencies</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>reenlightenment</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tlbflush</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>ipi</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>avic</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>emsr_bitmap</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>xmm_input</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <defaults>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <spinlocks>4095</spinlocks>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <stimer_direct>on</stimer_direct>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </defaults>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </hyperv>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    <launchSecurity supported='yes'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      <enum name='sectype'>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:        <value>tdx</value>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:      </enum>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:    </launchSecurity>
Nov 29 10:16:44 np0005539860 nova_compute[188544]:  </features>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: </domainCapabilities>
Nov 29 10:16:44 np0005539860 nova_compute[188544]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
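
The XML dump above is libvirt's domainCapabilities document as logged by nova-compute's _get_domain_capabilities. A minimal sketch, assuming the libvirt-python bindings and a local qemu:///system socket (not nova's actual code path), of fetching the same document and listing which named CPU models the host can run:

    import libvirt                      # libvirt-python bindings
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    # Passing None for emulator/arch/machine/virttype resolves to the
    # host defaults, the same document nova logs above.
    caps_xml = conn.getDomainCapabilities(None, None, None, None, 0)
    conn.close()

    root = ET.fromstring(caps_xml)
    # <model usable='yes|no'> entries live under the 'custom' CPU mode.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(model.text, model.get('usable'))

Models reported usable='no' carry a <blockers> sibling naming the missing host features (e.g. the avx512* set blocking the Skylake-Server variants above).
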
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.310 188548 DEBUG nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.311 188548 INFO nova.virt.libvirt.host [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Secure Boot support detected#033[00m
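
The "Secure Boot support detected" decision is derived from the same domain capabilities document: the <os>/<loader> section advertises whether a secure-capable firmware loader is available. A hedged standalone sketch of that kind of lookup (element paths follow the standard domaincaps schema, not nova's exact implementation):

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    root = ET.fromstring(conn.getDomainCapabilities(None, None, None, None, 0))
    conn.close()

    # <os><loader><enum name='secure'><value>yes</value>... means a
    # secure-boot capable loader is present on this host.
    secure_values = [v.text for v in
                     root.findall("./os/loader/enum[@name='secure']/value")]
    print('Secure Boot supported:', 'yes' in secure_values)
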
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.313 188548 INFO nova.virt.libvirt.driver [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.329 188548 DEBUG nova.virt.libvirt.driver [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.378 188548 INFO nova.virt.node [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Determined node identity 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from /var/lib/nova/compute_id#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.396 188548 WARNING nova.compute.manager [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Compute nodes ['4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.425 188548 INFO nova.compute.manager [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.458 188548 WARNING nova.compute.manager [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.459 188548 DEBUG oslo_concurrency.lockutils [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.459 188548 DEBUG oslo_concurrency.lockutils [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.460 188548 DEBUG oslo_concurrency.lockutils [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.460 188548 DEBUG nova.compute.resource_tracker [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 10:16:44 np0005539860 systemd[1]: Starting libvirt nodedev daemon...
Nov 29 10:16:44 np0005539860 systemd[1]: Started libvirt nodedev daemon.
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.798 188548 WARNING nova.virt.libvirt.driver [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.799 188548 DEBUG nova.compute.resource_tracker [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6056MB free_disk=72.6112289428711GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.799 188548 DEBUG oslo_concurrency.lockutils [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.799 188548 DEBUG oslo_concurrency.lockutils [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.812 188548 WARNING nova.compute.resource_tracker [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] No compute node record for compute-0.ctlplane.example.com:4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd could not be found.#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.832 188548 INFO nova.compute.resource_tracker [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.921 188548 DEBUG nova.compute.resource_tracker [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 10:16:44 np0005539860 nova_compute[188544]: 2025-11-29 15:16:44.921 188548 DEBUG nova.compute.resource_tracker [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
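
The hypervisor resource view two entries above embeds the host PCI inventory as a JSON list (pci_devices=[...]). A small standard-library sketch of slicing that payload, e.g. to group functions by vendor and spot the virtio (1af4) devices; the literal below is trimmed to two entries copied from the log line so the snippet stays self-contained:

    import json
    from collections import Counter

    pci_devices = json.loads("""[
      {"address": "0000:00:07.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:01.0", "vendor_id": "8086", "product_id": "7000"}
    ]""")

    # Tally devices per vendor; 1af4 is the virtio PCI vendor ID.
    print(Counter(d["vendor_id"] for d in pci_devices))
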
Nov 29 10:16:45 np0005539860 python3.9[189421]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:16:45 np0005539860 systemd[1]: Stopping nova_compute container...
Nov 29 10:16:45 np0005539860 nova_compute[188544]: 2025-11-29 15:16:45.263 188548 DEBUG oslo_concurrency.lockutils [None req-a76a33e3-16f2-4655-b577-9ca7a78be85d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.464s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:16:45 np0005539860 nova_compute[188544]: 2025-11-29 15:16:45.264 188548 DEBUG oslo_concurrency.lockutils [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 10:16:45 np0005539860 nova_compute[188544]: 2025-11-29 15:16:45.265 188548 DEBUG oslo_concurrency.lockutils [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 10:16:45 np0005539860 nova_compute[188544]: 2025-11-29 15:16:45.265 188548 DEBUG oslo_concurrency.lockutils [None req-1bbab457-f4b3-4683-8437-f30262ed88d9 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 10:16:45 np0005539860 virtqemud[189062]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Nov 29 10:16:45 np0005539860 virtqemud[189062]: hostname: compute-0
Nov 29 10:16:45 np0005539860 virtqemud[189062]: End of file while reading data: Input/output error
Nov 29 10:16:45 np0005539860 systemd[1]: libpod-116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2.scope: Deactivated successfully.
Nov 29 10:16:45 np0005539860 systemd[1]: libpod-116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2.scope: Consumed 3.117s CPU time.
Nov 29 10:16:45 np0005539860 podman[189425]: 2025-11-29 15:16:45.664142407 +0000 UTC m=+0.464849487 container died 116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=nova_compute, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 10:16:45 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2-userdata-shm.mount: Deactivated successfully.
Nov 29 10:16:45 np0005539860 systemd[1]: var-lib-containers-storage-overlay-08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40-merged.mount: Deactivated successfully.
Nov 29 10:16:45 np0005539860 podman[189425]: 2025-11-29 15:16:45.731862177 +0000 UTC m=+0.532569227 container cleanup 116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 10:16:45 np0005539860 podman[189425]: nova_compute
Nov 29 10:16:45 np0005539860 podman[189456]: nova_compute
Nov 29 10:16:45 np0005539860 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Nov 29 10:16:45 np0005539860 systemd[1]: Stopped nova_compute container.
Nov 29 10:16:45 np0005539860 systemd[1]: Starting nova_compute container...
Nov 29 10:16:45 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:16:46 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:46 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:46 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:46 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:46 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/08893bf227a0fc315dc02ad9c3c6f1ef7ffb8c6c49bd5b07d1ddf4e99d4c9e40/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:46 np0005539860 podman[189469]: 2025-11-29 15:16:46.202585648 +0000 UTC m=+0.354873908 container init 116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 10:16:46 np0005539860 podman[189469]: 2025-11-29 15:16:46.213806743 +0000 UTC m=+0.366095003 container start 116eccbb0ec803ea138a5ef6bbb779e694e226d7509629923f44799185cdd2d2 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 10:16:46 np0005539860 podman[189469]: nova_compute
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + sudo -E kolla_set_configs
Nov 29 10:16:46 np0005539860 systemd[1]: Started nova_compute container.
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Validating config file
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying service configuration files
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /etc/nova/nova.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /etc/ceph
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Creating directory /etc/ceph
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /etc/ceph
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Writing out command to execute
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Nov 29 10:16:46 np0005539860 nova_compute[189485]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Nov 29 10:16:46 np0005539860 nova_compute[189485]: ++ cat /run_command
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + CMD=nova-compute
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + ARGS=
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + sudo kolla_copy_cacerts
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + [[ ! -n '' ]]
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + . kolla_extend_start
Nov 29 10:16:46 np0005539860 nova_compute[189485]: Running command: 'nova-compute'
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + echo 'Running command: '\''nova-compute'\'''
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + umask 0022
Nov 29 10:16:46 np0005539860 nova_compute[189485]: + exec nova-compute
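
The kolla_set_configs messages above (Deleting/Copying/Setting permission) are driven by the entries in /var/lib/kolla/config_files/config.json. A simplified Python sketch of the COPY_ALWAYS strategy those lines reflect; this is an illustration of the behaviour, not the shipped kolla code, and the perm default is an assumption:

    import json
    import os
    import shutil

    with open('/var/lib/kolla/config_files/config.json') as f:
        cfg = json.load(f)

    # COPY_ALWAYS: remove the destination and re-copy on every container start.
    for entry in cfg.get('config_files', []):
        src, dest = entry['source'], entry['dest']
        if os.path.exists(dest):
            os.remove(dest)                       # "Deleting /etc/nova/nova.conf"
        shutil.copy(src, dest)                    # "Copying ... to ..."
        perm = int(entry.get('perm', '0600'), 8)  # assumed default
        os.chmod(dest, perm)                      # "Setting permission for ..."

After the copies, the wrapper reads /run_command (here: nova-compute) and execs it, which is why the container's main process becomes nova-compute itself.
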
Nov 29 10:16:47 np0005539860 python3.9[189648]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Nov 29 10:16:47 np0005539860 systemd[1]: Started libpod-conmon-73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384.scope.
Nov 29 10:16:47 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:16:47 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7276f40bcdd9a5fd8ffc9f3878645c19be311ffb1f5f9b57a2415bef45a27c4/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:47 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7276f40bcdd9a5fd8ffc9f3878645c19be311ffb1f5f9b57a2415bef45a27c4/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:47 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a7276f40bcdd9a5fd8ffc9f3878645c19be311ffb1f5f9b57a2415bef45a27c4/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Nov 29 10:16:47 np0005539860 podman[189674]: 2025-11-29 15:16:47.439005721 +0000 UTC m=+0.132950225 container init 73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:16:47 np0005539860 podman[189674]: 2025-11-29 15:16:47.448758467 +0000 UTC m=+0.142702891 container start 73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 10:16:47 np0005539860 python3.9[189648]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Applying nova statedir ownership
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Nov 29 10:16:47 np0005539860 nova_compute_init[189695]: INFO:nova_statedir:Nova statedir ownership complete
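Annotation: the nova_statedir lines document the whole job: walk /var/lib/nova, chown anything not already 42436:42436 (the nova uid/gid in these images), relabel it container_file_t, and leave the path named in NOVA_STATEDIR_OWNERSHIP_SKIP alone. A minimal sketch of that pattern follows; it is an assumption-laden stand-in, not the shipped nova_statedir_ownership.py (which, among other differences, uses the selinux bindings rather than shelling out to chcon):

    import os
    import subprocess

    TARGET_UID = TARGET_GID = 42436
    SKIP = {"/var/lib/nova/compute_id"}  # from NOVA_STATEDIR_OWNERSHIP_SKIP
    CONTEXT = "system_u:object_r:container_file_t:s0"

    def fix_statedir(root="/var/lib/nova"):
        for dirpath, dirnames, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # matches the "Changing ownership ... to 42436:42436" lines
                    os.lchown(path, TARGET_UID, TARGET_GID)
                # stand-in for the selinux relabel logged above
                subprocess.run(["chcon", "-h", CONTEXT, path], check=False)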
Nov 29 10:16:47 np0005539860 systemd[1]: libpod-73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384.scope: Deactivated successfully.
Nov 29 10:16:47 np0005539860 podman[189697]: 2025-11-29 15:16:47.550905081 +0000 UTC m=+0.048373472 container died 73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 10:16:47 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384-userdata-shm.mount: Deactivated successfully.
Nov 29 10:16:47 np0005539860 systemd[1]: var-lib-containers-storage-overlay-a7276f40bcdd9a5fd8ffc9f3878645c19be311ffb1f5f9b57a2415bef45a27c4-merged.mount: Deactivated successfully.
Nov 29 10:16:47 np0005539860 podman[189703]: 2025-11-29 15:16:47.596970411 +0000 UTC m=+0.063050088 container cleanup 73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=nova_compute_init, managed_by=edpm_ansible, io.buildah.version=1.41.3)
Nov 29 10:16:47 np0005539860 systemd[1]: libpod-conmon-73fba104844d85427a5b3df0eae756db775ec2102612d8b4b042b196bf611384.scope: Deactivated successfully.
Nov 29 10:16:48 np0005539860 systemd[1]: session-23.scope: Deactivated successfully.
Nov 29 10:16:48 np0005539860 systemd[1]: session-23.scope: Consumed 2min 6.020s CPU time.
Nov 29 10:16:48 np0005539860 systemd-logind[794]: Session 23 logged out. Waiting for processes to exit.
Nov 29 10:16:48 np0005539860 systemd-logind[794]: Removed session 23.
Nov 29 10:16:48 np0005539860 nova_compute[189485]: 2025-11-29 15:16:48.316 189489 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 10:16:48 np0005539860 nova_compute[189485]: 2025-11-29 15:16:48.316 189489 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 10:16:48 np0005539860 nova_compute[189485]: 2025-11-29 15:16:48.317 189489 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Nov 29 10:16:48 np0005539860 nova_compute[189485]: 2025-11-29 15:16:48.317 189489 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
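Annotation: the os_vif lines show its entry-point based plugin discovery: each installed vif_plug_* package registers a plugin, and one initialize() call loads them all. Reproducing just that step is a one-liner against the public API:

    import os_vif

    # Loads every registered VIF plugin (linux_bridge, noop and ovs on this
    # host) and logs the same "Loaded VIF plugin class ..." messages.
    os_vif.initialize()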
Nov 29 10:16:48 np0005539860 nova_compute[189485]: 2025-11-29 15:16:48.443 189489 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 10:16:48 np0005539860 nova_compute[189485]: 2025-11-29 15:16:48.472 189489 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 10:16:48 np0005539860 nova_compute[189485]: 2025-11-29 15:16:48.473 189489 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
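Annotation: the grep against /sbin/iscsiadm is a capability probe: rather than executing iscsiadm, the volume-attach stack greps the binary for the literal string node.session.scan, and an exit code of 1 (as here) means the string is absent, so manual-scan support is treated as unavailable. The equivalent check in plain Python:

    import subprocess

    # grep -F exits 0 when the literal string occurs in the file, 1 when not.
    rc = subprocess.run(
        ["grep", "-F", "node.session.scan", "/sbin/iscsiadm"],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
    ).returncode
    supports_manual_scan = (rc == 0)  # False on this host, per the log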
Nov 29 10:16:48 np0005539860 nova_compute[189485]: 2025-11-29 15:16:48.932 189489 INFO nova.virt.driver [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.047 189489 INFO nova.compute.provider_config [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.076 189489 DEBUG oslo_concurrency.lockutils [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.076 189489 DEBUG oslo_concurrency.lockutils [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.076 189489 DEBUG oslo_concurrency.lockutils [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
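Annotation: the singleton_lock acquire/release pair around service launch comes from oslo.concurrency, which logs those Acquiring/Acquired/Releasing lines itself. The usual way to take such a lock is the lockutils.lock() context manager; by default it is an in-process lock, and external=True would add a file lock under oslo_concurrency.lock_path (/var/lib/nova/tmp in this configuration):

    from oslo_concurrency import lockutils

    with lockutils.lock("singleton_lock"):
        pass  # critical section; emits the Acquiring/Acquired/Releasing lines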
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.077 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.077 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.077 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.077 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.077 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.077 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.077 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.078 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.078 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.078 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.078 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.078 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.078 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.078 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.079 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.079 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.079 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.079 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.079 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.079 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.079 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.079 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.080 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.080 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.080 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.080 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.080 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.080 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.081 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.081 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.081 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.081 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.081 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.081 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.081 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.082 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.082 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.082 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.082 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.082 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.082 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.083 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.083 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.083 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.083 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.083 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.083 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.083 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.084 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.084 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.084 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.084 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.084 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.084 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.085 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.085 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.085 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.085 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.085 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.085 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.085 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.085 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.086 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.086 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.086 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.086 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.086 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.086 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.086 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.087 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.087 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.087 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.087 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.087 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.087 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.087 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.088 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.088 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.088 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.088 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.088 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.088 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.088 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.088 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.089 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.089 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.089 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.089 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.089 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.089 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.089 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.090 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.090 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.090 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.090 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.090 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.090 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.090 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.090 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.091 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.091 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.091 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.091 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.091 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.091 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.091 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.092 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.092 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.092 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.092 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.092 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.092 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.092 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.093 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.093 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.093 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.093 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.093 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.093 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.093 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.094 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.094 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.094 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.094 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.094 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.094 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.094 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.095 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.095 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.095 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.095 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.095 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.095 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.095 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.095 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.096 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.096 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.096 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.096 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.096 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.096 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.096 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.096 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.097 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.097 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.097 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.097 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.097 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.097 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.097 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.098 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.098 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.098 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.098 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.098 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.098 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.099 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.099 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.099 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.099 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.099 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.099 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.099 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.100 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.100 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.100 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.100 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.100 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.100 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.100 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.101 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.101 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.101 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.101 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.101 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.101 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.101 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.102 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.102 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.102 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.102 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.102 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.102 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.102 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.103 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.103 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.103 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.103 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.103 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.103 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.103 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.104 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.104 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.104 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.104 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.104 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.104 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.104 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.104 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.105 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.105 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.105 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.105 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.105 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.105 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.105 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.106 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.106 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.106 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.106 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.106 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.106 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.106 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.107 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.107 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.107 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.107 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.107 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.107 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.108 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.108 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.108 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.108 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.108 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.108 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.108 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.108 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.109 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.109 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.109 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.109 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.109 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.109 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.109 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.110 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.110 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.110 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.110 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.110 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.110 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.110 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.111 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.111 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.111 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.111 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.111 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.111 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.111 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.112 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.112 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.112 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.112 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.112 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.112 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.112 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.113 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.113 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.113 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.113 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.113 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.113 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.113 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.114 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.114 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.114 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.114 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.114 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.114 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.114 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.115 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.115 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.115 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.115 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.115 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.115 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.115 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.115 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.116 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.116 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.116 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.116 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.116 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.116 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.116 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.117 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.117 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.117 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.117 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.117 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.117 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.117 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.118 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.118 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.118 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.118 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.118 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.118 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.118 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.119 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.119 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.119 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.119 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.119 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.119 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.119 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.119 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.120 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.120 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.120 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.120 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.120 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.120 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.120 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.121 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.121 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.121 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.121 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.121 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.121 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.121 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.122 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.122 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.122 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.122 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.122 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.122 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.122 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.123 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.123 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.123 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.123 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.123 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.123 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.123 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.123 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.124 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.124 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.124 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.124 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.124 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.124 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.124 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.125 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.125 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.125 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.125 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.125 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.125 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.125 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.126 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.126 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.126 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.126 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.126 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.126 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.127 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.127 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.127 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.127 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.127 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.128 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.128 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
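For scale, with the image_cache values just dumped: the cache manager periodic task runs every 2400 s (40 minutes), unused original base images must be at least 86400 s (24 h) old before removal, and unused resized images at least 3600 s (1 h). Plain arithmetic on those numbers:

    # Arithmetic on the image_cache values dumped above.
    manager_interval = 2400    # seconds between cache-manager runs
    original_min_age = 86400   # unused original base images kept this long
    resized_min_age = 3600     # unused resized base images kept this long

    print(manager_interval / 60)                # 40.0 minutes per run
    print(original_min_age / manager_interval)  # 36.0 runs before eligible
    print(resized_min_age / manager_interval)   # 1.5 runs before eligible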
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.128 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.128 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.128 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.128 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.128 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.129 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.129 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.129 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.129 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.129 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.129 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.129 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.130 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.130 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.130 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.130 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.130 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.131 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.131 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.132 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.132 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.132 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.133 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.133 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.133 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.134 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
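The ironic group above is mostly stock keystoneauth1 session/adapter options (cafile, certfile, timeout, valid_interfaces, service_type, region_name), and ironic.auth_type = None means no credentials are configured for that driver, consistent with this being a libvirt/KVM node (libvirt.virt_type = kvm appears further down). A sketch of how a group like this is typically consumed, using the standard keystoneauth1 loading helpers rather than nova's exact wiring:

    # Sketch: consuming an [ironic]-style option group with keystoneauth1.
    # The group name comes from the log; a real service would also pass
    # --config-file /etc/nova/nova.conf to CONF().
    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_auth_conf_options(CONF, 'ironic')
    ks_loading.register_session_conf_options(CONF, 'ironic')
    ks_loading.register_adapter_conf_options(CONF, 'ironic')
    CONF([])

    # With auth_type unset (None above), no auth plugin can be loaded.
    auth = ks_loading.load_auth_from_conf_options(CONF, 'ironic')
    if auth is not None:
        sess = ks_loading.load_session_from_conf_options(
            CONF, 'ironic', auth=auth)        # honours cafile/timeout/...
        adapter = ks_loading.load_adapter_from_conf_options(
            CONF, 'ironic', session=sess)     # service_type=baremetal, etc.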
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.134 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.134 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
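The **** shown for key_manager.fixed_key is oslo.config's redaction: options registered with secret=True are masked by log_opt_values rather than printed (the same masking appears for neutron.metadata_proxy_shared_secret further down). A minimal illustration, with the override value invented for the demo:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([cfg.StrOpt('fixed_key', secret=True)],
                       group='key_manager')
    CONF([])
    CONF.set_override('fixed_key', 'not-shown-in-logs', group='key_manager')

    # Logs: key_manager.fixed_key          = ****
    CONF.log_opt_values(LOG, logging.DEBUG)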
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.135 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.135 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.135 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.135 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.136 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.136 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.136 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.136 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.136 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.136 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.136 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.137 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.137 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.137 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.137 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.137 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.137 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.137 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.137 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.138 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.138 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.138 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.138 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.138 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.138 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.138 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.139 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.139 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.139 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.139 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.139 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.139 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.139 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.140 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.140 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.140 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.140 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.140 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.140 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.140 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.140 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.141 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.141 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.141 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.141 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.141 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.141 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.141 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.141 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.142 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.142 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.142 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.142 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.142 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.142 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.142 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.143 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.143 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.143 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.143 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.143 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.143 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.143 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.144 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.144 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.144 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.144 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.144 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.144 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.144 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.145 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.145 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.145 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.145 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.145 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.145 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.145 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.145 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.146 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.146 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.146 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.146 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.146 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.146 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.146 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.147 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.147 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.147 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.147 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.147 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.147 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.147 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.148 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.148 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.148 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.148 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.148 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.148 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.148 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.148 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.149 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.149 189489 WARNING oslo_config.cfg [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Nov 29 10:16:49 np0005539860 nova_compute[189485]: live_migration_uri is deprecated for removal in favor of two other options that
Nov 29 10:16:49 np0005539860 nova_compute[189485]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Nov 29 10:16:49 np0005539860 nova_compute[189485]: and ``live_migration_inbound_addr`` respectively.
Nov 29 10:16:49 np0005539860 nova_compute[189485]: ).  Its value may be silently ignored in the future.
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.149 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
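Per the WARNING above, the pinned qemu+tls://%s/system URI decomposes into the two replacement options: live_migration_scheme supplies the tls part and live_migration_inbound_addr fills the %s target on each host. A rough equivalence check (the address below is a documentation example, not this deployment's value):

    # How the replacement options compose into the deprecated URI form.
    live_migration_scheme = 'tls'               # [libvirt] live_migration_scheme
    live_migration_inbound_addr = '192.0.2.10'  # example per-host address

    uri_template = 'qemu+%s://%s/system'
    print(uri_template % (live_migration_scheme, live_migration_inbound_addr))
    # -> qemu+tls://192.0.2.10/system, matching qemu+tls://%s/system above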
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.149 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.149 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.150 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.150 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.150 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.150 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.150 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.150 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.150 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.150 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.151 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.151 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.151 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.151 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.151 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.151 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.151 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.152 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.152 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.152 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.152 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.152 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.152 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.152 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.152 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.153 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.153 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.153 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.153 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.153 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.153 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.153 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.154 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.154 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.154 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.154 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.154 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.154 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.154 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.155 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.155 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.155 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.155 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.155 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.155 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.155 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.156 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.156 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.156 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.156 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.156 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
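Dumps like this are convenient to diff between hosts or releases, since the "group.option = value" pairs can be scraped straight back out of the journal. A small sketch, with the regex keyed to the exact line format above (reads journal text on stdin, e.g. piped from journalctl):

    # Sketch: recover {"group.option": "value"} pairs from a log_opt_values
    # dump like the one above.
    import re
    import sys

    PATTERN = re.compile(
        r'\]\s+(?P<name>[A-Za-z0-9_]+\.[A-Za-z0-9_]+)\s+=\s+'
        r'(?P<value>.*?)\s+log_opt_values\s')

    opts = {}
    for line in sys.stdin:
        m = PATTERN.search(line)
        if m:
            opts[m.group('name')] = m.group('value')

    print(opts.get('libvirt.virt_type'))           # kvm
    print(opts.get('libvirt.live_migration_uri'))  # qemu+tls://%s/system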
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.156 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.156 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.157 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.157 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.157 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.157 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.157 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.157 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.157 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.158 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.158 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.158 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.158 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.158 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.158 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.158 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.158 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.159 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.159 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.159 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.159 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.159 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.159 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.159 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.160 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.160 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.160 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.160 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
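The neutron.* lines above are nova's client-side settings for the networking service, registered from the [neutron] section of nova.conf. As a minimal nova.conf-style sketch, assuming the standard oslo.config mapping of group.option to [group]/option and copying only the non-default values shown above (the shared secret is masked in the dump and kept as a placeholder):

    [neutron]
    auth_type = password
    region_name = regionOne
    service_type = network
    valid_interfaces = internal
    ovs_bridge = br-int
    default_floating_pool = nova
    http_retries = 3
    service_metadata_proxy = True
    # logged masked as ****; the real value is not recoverable from the log
    metadata_proxy_shared_secret = <redacted>

With service_metadata_proxy = True, metadata requests proxied through neutron are validated against this shared secret.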
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.160 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.160 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.160 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.161 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.161 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.161 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.161 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.161 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.161 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.161 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.162 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.162 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.162 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.162 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.162 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.162 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.162 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.163 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.163 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.163 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.163 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.163 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.163 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.163 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.164 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.164 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.164 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.164 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.164 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.164 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.164 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.164 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.165 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.165 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.165 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.165 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.165 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.165 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.165 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.166 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.166 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.166 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.166 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.166 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.166 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
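The placement.* block records how this compute node authenticates to the Placement API: Keystone password auth against the internal endpoint, as the nova user in the service project. A minimal nova.conf-style sketch of the same values (password masked in the dump, kept as a placeholder):

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    project_domain_name = Default
    user_domain_name = Default
    project_name = service
    username = nova
    # logged masked as ****
    password = <redacted>
    region_name = regionOne
    valid_interfaces = internal

The resource tracker on this host uses these credentials to report inventories and allocations to Placement.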
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.166 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.167 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.167 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.167 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.167 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.167 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.167 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.167 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.168 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.168 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.168 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.168 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.168 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
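The quota.* values match the stock defaults (10 instances, 20 cores, 51200 MB of RAM per project, DbQuotaDriver with post-claim rechecking). Written out as a nova.conf-style sketch purely for reference, since leaving them unset would yield the same effective values:

    [quota]
    driver = nova.quota.DbQuotaDriver
    instances = 10
    cores = 20
    ram = 51200
    key_pairs = 100
    metadata_items = 128
    server_groups = 10
    server_group_members = 10
    recheck_quota = True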
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.168 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.168 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.169 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.169 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.169 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.169 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.169 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.169 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.169 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.170 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.170 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.170 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.170 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.170 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.170 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.171 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.171 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.171 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.171 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.171 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.171 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.171 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.171 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.172 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.172 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.172 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.172 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.172 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.172 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.172 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.173 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.173 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.173 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.173 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.173 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.173 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.173 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
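filter_scheduler.* shows the filter chain configured for this deployment: available_filters exposes everything under nova.scheduler.filters, while enabled_filters narrows the chain to compute status/capabilities, image properties, and server-group (anti-)affinity. A nova.conf-style sketch of the same chain (oslo list options are comma-separated in INI form):

    [filter_scheduler]
    available_filters = nova.scheduler.filters.all_filters
    enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
    host_subset_size = 1
    max_instances_per_host = 50

Note that nova-compute registers and dumps the full nova option set, so these scheduler-side options appear in this log even though only the scheduler service acts on them.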
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.174 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.174 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.174 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.174 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.174 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.174 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.175 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.175 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.175 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.175 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.175 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.175 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.175 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.176 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.176 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.176 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.176 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.176 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.176 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.176 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.177 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.177 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.177 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.177 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.177 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.177 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.178 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.178 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.178 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.178 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.178 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.178 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.178 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.179 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.179 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.179 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.179 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.179 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.179 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.179 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.180 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.180 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.180 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.180 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.180 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.180 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.180 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.180 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.181 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.181 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.181 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.181 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.181 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.181 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.181 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.182 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.182 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.182 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.182 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.182 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.182 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.182 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.183 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.183 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.183 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.183 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.183 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.183 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.184 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.184 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.184 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.184 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.185 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.185 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.185 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.186 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.186 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.186 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.187 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.187 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.187 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
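The vnc.* group is the only console path enabled on this node (rdp, spice, and serial_console are all disabled above). server_listen = ::0 has each guest's QEMU VNC server bind the IPv6 any-address, and the noVNC proxy reaches it via the host address 192.168.122.100. As a nova.conf-style sketch of the values shown:

    [vnc]
    enabled = True
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.100
    auth_schemes = none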
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.187 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.187 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.188 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.188 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.188 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.188 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.189 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.189 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.189 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.189 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.189 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.190 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.190 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.190 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.190 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.190 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.191 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.191 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.191 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.191 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.191 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.192 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.192 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.192 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.192 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.192 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.193 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.193 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.193 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.194 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.194 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.194 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
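The wsgi.wsgi_log_format value above is a plain Python %-style format string that the WSGI server fills from a per-request dictionary. A minimal sketch of how it renders; the placeholder names come from the logged value itself, while the sample request values are invented for illustration:

    # Render the wsgi_log_format string from the dump above.
    fmt = ('%(client_ip)s "%(request_line)s" status: %(status_code)s '
           'len: %(body_length)s time: %(wall_seconds).7f')
    record = {                      # hypothetical request data
        'client_ip': '192.168.122.100',
        'request_line': 'GET /v2.1/servers HTTP/1.1',
        'status_code': 200,
        'body_length': 1536,
        'wall_seconds': 0.0123456,
    }
    print(fmt % record)
    # -> 192.168.122.100 "GET /v2.1/servers HTTP/1.1" status: 200 len: 1536 time: 0.0123456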
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.194 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.195 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.195 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.195 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.195 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.196 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.196 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.196 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.196 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.197 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.197 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.197 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.197 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.197 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
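The oslo_policy.* block shows scope enforcement fully switched on (enforce_new_defaults and enforce_scope both True), with rules read from policy.yaml plus any drop-ins under policy.d. A minimal sketch, assuming oslo.policy is installed, of the Enforcer these options feed:

    # Build a policy enforcer honouring the [oslo_policy] settings above.
    from oslo_config import cfg
    from oslo_policy import policy

    conf = cfg.CONF        # in the service this is nova's parsed CONF
    conf([])               # parse an empty command line for this sketch
    enforcer = policy.Enforcer(conf)   # reads policy_file and policy_dirs
    enforcer.load_rules()  # parse policy.yaml and policy.d/ overrides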
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.198 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.198 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.198 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.198 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.198 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.199 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.199 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.199 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.199 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.200 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.200 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.200 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.200 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.201 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.201 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.201 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.201 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.201 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.202 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.202 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.202 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.203 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.203 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.203 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.203 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.203 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.204 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.204 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.204 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.204 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.205 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.205 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.205 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.205 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.205 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
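Everything in the oslo_messaging_rabbit.* block (durable queues, quorum queues, heartbeat tuning) is consumed when oslo.messaging builds the RPC transport for the service. A minimal sketch, assuming oslo.messaging is installed; the broker URL is a placeholder, since this node's real transport_url is masked in the dump:

    # Build an RPC transport and client the way a service would; the
    # [oslo_messaging_rabbit] options above are picked up from conf.
    from oslo_config import cfg
    import oslo_messaging

    conf = cfg.CONF
    conf([])  # in nova this is already parsed from nova.conf
    transport = oslo_messaging.get_rpc_transport(
        conf, url='rabbit://user:secret@rabbit.example:5672/')  # placeholder URL
    target = oslo_messaging.Target(topic='compute', server='np0005539860')
    client = oslo_messaging.RPCClient(transport, target)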
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.206 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.206 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.206 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.206 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
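The **** shown for oslo_messaging_notifications.transport_url (and for oslo_limit.password further down) is not the stored value: oslo.config masks any option registered with secret=True when log_opt_values() writes this dump. A small sketch, assuming oslo.config is installed:

    # Secret options are printed as **** by log_opt_values().
    import logging
    from oslo_config import cfg

    conf = cfg.ConfigOpts()
    conf.register_opts([
        cfg.StrOpt('transport_url', secret=True,
                   default='rabbit://user:secret@host/'),  # never logged
    ])
    logging.basicConfig(level=logging.DEBUG)
    conf([])  # parse an empty command line
    conf.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
    # dump contains: transport_url = ****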
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.207 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.207 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.207 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.207 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.208 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.208 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.208 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.208 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.208 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.209 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.209 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.209 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.209 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.209 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.210 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.210 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.210 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.210 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.211 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.211 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.211 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.211 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.212 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.212 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.212 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.212 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.213 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.213 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.213 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.213 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.213 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.214 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.214 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.214 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.214 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.214 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.215 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.215 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
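The oslo_limit.* block is a standard keystoneauth credential set (auth_type password, system-scoped) that nova's unified-limits code uses to query Keystone. A minimal sketch, assuming keystoneauth1 is installed; the password is a placeholder because the real one is masked above:

    # System-scoped Keystone session mirroring the [oslo_limit] settings.
    from keystoneauth1.identity import v3
    from keystoneauth1 import session

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000',
        username='nova',
        password='REDACTED',           # logged as **** above
        user_domain_name='Default',
        system_scope='all',            # oslo_limit.system_scope = all
    )
    sess = session.Session(auth=auth)
    # sess.get_token() would now authenticate against Keystone.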
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.215 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.215 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.215 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.216 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.216 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.216 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.216 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.216 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.217 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.217 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.217 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.217 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.218 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.218 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.218 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.218 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.218 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.219 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.219 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.219 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.219 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.219 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.220 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.220 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.220 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.220 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.221 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.221 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.221 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
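os_vif_ovs.ovsdb_connection points os-vif at the local OVSDB server over TCP (tcp:127.0.0.1:6640) with a 120-second timeout. A quick way to poke the same endpoint by hand, sketched with subprocess and assuming the ovs-vsctl binary is present on the host:

    # Query the OVSDB endpoint os-vif uses, with the same timeout.
    import subprocess

    result = subprocess.run(
        ['ovs-vsctl', '--db=tcp:127.0.0.1:6640', '--timeout=120', 'list-br'],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)  # one OVS bridge name per line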
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.221 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.221 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.222 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.222 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.222 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.222 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.223 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.223 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.223 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.223 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.223 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.224 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.224 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.224 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.224 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
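The numeric capability lists in the four privsep blocks above (vif_plug_linux_bridge_privileged, vif_plug_ovs_privileged, privsep_osbrick, nova_sys_admin) are Linux capability numbers from <linux/capability.h>. A small decoder for the values that appear in this dump:

    # Map the privsep capability numbers above to their kernel names.
    CAP_NAMES = {
        0: 'CAP_CHOWN', 1: 'CAP_DAC_OVERRIDE', 2: 'CAP_DAC_READ_SEARCH',
        3: 'CAP_FOWNER', 12: 'CAP_NET_ADMIN', 21: 'CAP_SYS_ADMIN',
    }
    contexts = {
        'vif_plug_linux_bridge_privileged': [12],
        'vif_plug_ovs_privileged': [12, 1],
        'privsep_osbrick': [21],
        'nova_sys_admin': [0, 1, 2, 3, 12, 21],
    }
    for name, caps in contexts.items():
        print(name, '->', [CAP_NAMES[c] for c in caps])
    # e.g. nova_sys_admin -> ['CAP_CHOWN', 'CAP_DAC_OVERRIDE', 'CAP_DAC_READ_SEARCH',
    #                         'CAP_FOWNER', 'CAP_NET_ADMIN', 'CAP_SYS_ADMIN']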
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.224 189489 DEBUG oslo_service.service [None req-bd149332-d818-428a-b085-7213d7897bef - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
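The asterisk banner closes the option dump that log_opt_values() opened. When triaging a node it is often handy to turn such a capture back into a dict; a throwaway parser matched to the exact line layout above (journald prefix, request context in brackets, then "group.option = value log_opt_values ..."):

    # Extract "group.option = value" pairs from a captured journal dump.
    import re

    OPT_RE = re.compile(r'\] (\S+\.\S+)\s+= (.*?) log_opt_values ')

    def parse_opt_dump(lines):
        opts = {}
        for line in lines:
            match = OPT_RE.search(line)
            if match:
                opts[match.group(1)] = match.group(2)
        return opts

    # parse_opt_dump(open('nova-compute.log'))['vnc.novncproxy_port'] -> '6080'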
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.225 189489 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.247 189489 INFO nova.virt.node [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Determined node identity 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from /var/lib/nova/compute_id#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.247 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.248 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.248 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.248 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.263 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fe227a07cd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.265 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fe227a07cd0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.266 189489 INFO nova.virt.libvirt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Connection event '1' reason 'None'#033[00m
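The driver has now opened qemu:///system and registered its lifecycle and connection callbacks; the next log message is the raw capabilities XML it fetched over that connection. The same two calls, sketched with the libvirt-python bindings (assumes access to the local hypervisor):

    # Reproduce the connection + capabilities fetch logged above.
    import libvirt

    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getCapabilities()   # the <capabilities> document below
    print(caps_xml[:80])
    conn.close()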
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.274 189489 INFO nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Libvirt host capabilities <capabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <host>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <uuid>0615934f-a8e3-4c06-8053-42a9c2c49d13</uuid>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <arch>x86_64</arch>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model>EPYC-Rome-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <vendor>AMD</vendor>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <microcode version='16777317'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <signature family='23' model='49' stepping='0'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <maxphysaddr mode='emulate' bits='40'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='x2apic'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='tsc-deadline'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='osxsave'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='hypervisor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='tsc_adjust'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='spec-ctrl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='stibp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='arch-capabilities'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='cmp_legacy'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='topoext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='virt-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='lbrv'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='tsc-scale'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='vmcb-clean'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='pause-filter'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='pfthreshold'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='svme-addr-chk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='rdctl-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='skip-l1dfl-vmentry'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='mds-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature name='pschange-mc-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <pages unit='KiB' size='4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <pages unit='KiB' size='2048'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <pages unit='KiB' size='1048576'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <power_management>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <suspend_mem/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <suspend_disk/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <suspend_hybrid/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </power_management>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <iommu support='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <migration_features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <live/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <uri_transports>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <uri_transport>tcp</uri_transport>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <uri_transport>rdma</uri_transport>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </uri_transports>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </migration_features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <topology>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <cells num='1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <cell id='0'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:          <memory unit='KiB'>7864316</memory>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:          <pages unit='KiB' size='4'>1966079</pages>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:          <pages unit='KiB' size='2048'>0</pages>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:          <pages unit='KiB' size='1048576'>0</pages>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:          <distances>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <sibling id='0' value='10'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:          </distances>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:          <cpus num='8'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:          </cpus>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        </cell>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </cells>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </topology>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <cache>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </cache>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <secmodel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model>selinux</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <doi>0</doi>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </secmodel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <secmodel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model>dac</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <doi>0</doi>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <baselabel type='kvm'>+107:+107</baselabel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <baselabel type='qemu'>+107:+107</baselabel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </secmodel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </host>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <guest>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <os_type>hvm</os_type>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <arch name='i686'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <wordsize>32</wordsize>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <domain type='qemu'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <domain type='kvm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </arch>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <pae/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <nonpae/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <acpi default='on' toggle='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <apic default='on' toggle='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <cpuselection/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <deviceboot/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <disksnapshot default='on' toggle='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <externalSnapshot/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </guest>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <guest>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <os_type>hvm</os_type>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <arch name='x86_64'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <wordsize>64</wordsize>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <domain type='qemu'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <domain type='kvm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </arch>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <acpi default='on' toggle='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <apic default='on' toggle='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <cpuselection/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <deviceboot/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <disksnapshot default='on' toggle='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <externalSnapshot/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </guest>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 
Nov 29 10:16:49 np0005539860 nova_compute[189485]: </capabilities>
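The <capabilities> document dumped above is what libvirt returns from virConnectGetCapabilities(). A minimal sketch of fetching the same XML through the libvirt Python bindings follows; the read-only connection and the qemu:///system URI are assumptions for illustration, not values recorded in this log:

    import libvirt  # libvirt-python bindings

    # Read-only connection to the local libvirt daemon; the URI is an
    # assumed default, not something taken from the log above.
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        caps_xml = conn.getCapabilities()  # host <capabilities> XML, as dumped above
        print(caps_xml)
    finally:
        conn.close()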
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.280 189489 DEBUG nova.virt.libvirt.volume.mount [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.286 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
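The query logged here maps to virConnectGetDomainCapabilities(), issued once per (arch, machine type) pair. A hedged sketch of the equivalent call via the Python bindings, with the emulator path, arch, machine alias, and virt type taken from the surrounding log lines (the connection URI again being an assumption):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")  # assumed local URI
    try:
        # Mirrors the logged query: /usr/libexec/qemu-kvm, arch=i686,
        # machine alias 'q35', KVM domain type, no flags.
        dom_caps_xml = conn.getDomainCapabilities(
            "/usr/libexec/qemu-kvm", "i686", "q35", "kvm", 0)
        print(dom_caps_xml)  # yields the <domainCapabilities> XML below
    finally:
        conn.close()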
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.290 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Nov 29 10:16:49 np0005539860 nova_compute[189485]: <domainCapabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <domain>kvm</domain>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <arch>i686</arch>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <vcpu max='4096'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <iothreads supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <os supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <enum name='firmware'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <loader supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>rom</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pflash</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='readonly'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>yes</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>no</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='secure'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>no</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </loader>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </os>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='host-passthrough' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='hostPassthroughMigratable'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>on</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>off</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='maximum' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='maximumMigratable'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>on</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>off</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='host-model' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <vendor>AMD</vendor>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='x2apic'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='hypervisor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='stibp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='overflow-recov'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='succor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='lbrv'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc-scale'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='flushbyasid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='pause-filter'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='pfthreshold'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='disable' name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='custom' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Dhyana-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Genoa'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='auto-ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='auto-ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-128'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-256'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-512'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
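[annotation] The <model usable=...> / <blockers> entries in this dump are libvirt domain-capabilities output: for each named CPU model, libvirt reports whether the host CPU can run it, and if not, which required features the host lacks. A minimal sketch of fetching and summarizing the same document over libvirt-python follows; the connection URI, arch, and virt type are illustrative assumptions, and the <blockers> element requires a reasonably recent libvirt.

    # Minimal sketch (assumptions: libvirt-python installed, qemu:///system
    # reachable, libvirt new enough to emit <blockers> in domcapabilities).
    import xml.etree.ElementTree as ET

    import libvirt

    conn = libvirt.open("qemu:///system")
    # Same document nova_compute is logging here: models plus blocking features.
    xml_doc = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    conn.close()

    custom = ET.fromstring(xml_doc).find(".//cpu/mode[@name='custom']")
    blockers = {b.get("model"): [f.get("name") for f in b.findall("feature")]
                for b in custom.findall("blockers")}

    for model in custom.findall("model"):
        name = model.text
        if model.get("usable") == "yes":
            print(f"{name}: usable")
        else:
            print(f"{name}: blocked by {', '.join(blockers.get(name, ['?']))}")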
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v6'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v7'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
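[annotation] A model's blocker list doubles as a rough distance metric: IvyBridge above is unusable for the single missing feature 'erms', while the newer server models miss dozens. A short sketch ranking models by blocker count; it assumes the XML above was saved to domcaps.xml, a placeholder filename.

    # Sketch: rank unusable models by how many features block them.
    # Assumes the domcapabilities XML was saved to domcaps.xml (placeholder).
    import xml.etree.ElementTree as ET

    custom = ET.parse("domcaps.xml").find(".//cpu/mode[@name='custom']")
    counts = sorted(
        (len(b.findall("feature")), b.get("model"))
        for b in custom.findall("blockers")
    )
    for n, model in counts[:5]:
        print(f"{model}: {n} blocking feature(s)")
    # On this host, IvyBridge ranks first: only 'erms' is missing.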
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='KnightsMill'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512er'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512pf'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='KnightsMill-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512er'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512pf'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G4-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tbm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G5-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tbm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
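[annotation] In this stretch of the list, Nehalem and SandyBridge (and their -IBRS aliases) are the only models reported usable='yes' without deprecated='yes'. That shortlist is what matters when pinning a guest model, e.g. via Nova's [libvirt] cpu_mode = custom / cpu_models options; Nova is expected to reject configured models the host reports unusable. A sketch extracting the shortlist, using the same placeholder file as above:

    # Sketch: list models a guest could plausibly be pinned to on this host,
    # i.e. usable and not deprecated. domcaps.xml is a placeholder filename.
    import xml.etree.ElementTree as ET

    custom = ET.parse("domcaps.xml").find(".//cpu/mode[@name='custom']")
    ok = [m.text for m in custom.findall("model")
          if m.get("usable") == "yes" and m.get("deprecated") != "yes"]
    print("\n".join(ok))  # e.g. Nehalem, SandyBridge and their -IBRS variants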
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SierraForest'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cmpccxadd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SierraForest-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cmpccxadd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='athlon'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='athlon-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='core2duo'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='core2duo-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='coreduo'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='coreduo-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='n270'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='n270-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='phenom'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='phenom-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <memoryBacking supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <enum name='sourceType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>file</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>anonymous</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>memfd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </memoryBacking>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <devices>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <disk supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='diskDevice'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>disk</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>cdrom</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>floppy</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>lun</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='bus'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>fdc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>scsi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>sata</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-non-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </disk>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <graphics supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vnc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>egl-headless</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dbus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </graphics>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <video supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='modelType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vga</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>cirrus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>none</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>bochs</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ramfb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </video>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <hostdev supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='mode'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>subsystem</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='startupPolicy'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>default</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>mandatory</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>requisite</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>optional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='subsysType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pci</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>scsi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='capsType'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='pciBackend'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </hostdev>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <rng supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-non-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>random</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>egd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>builtin</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </rng>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <filesystem supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='driverType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>path</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>handle</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtiofs</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </filesystem>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <tpm supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tpm-tis</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tpm-crb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>emulator</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>external</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendVersion'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>2.0</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </tpm>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <redirdev supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='bus'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </redirdev>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <channel supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pty</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>unix</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </channel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <crypto supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>qemu</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>builtin</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </crypto>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <interface supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>default</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>passt</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </interface>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <panic supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>isa</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>hyperv</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </panic>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <console supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>null</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pty</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dev</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>file</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pipe</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>stdio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>udp</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tcp</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>unix</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>qemu-vdagent</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dbus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </console>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </devices>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <gic supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <vmcoreinfo supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <genid supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <backingStoreInput supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <backup supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <async-teardown supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <ps2 supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <sev supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <sgx supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <hyperv supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='features'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>relaxed</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vapic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>spinlocks</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vpindex</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>runtime</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>synic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>stimer</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>reset</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vendor_id</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>frequencies</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>reenlightenment</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tlbflush</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ipi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>avic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>emsr_bitmap</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>xmm_input</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <defaults>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <spinlocks>4095</spinlocks>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <stimer_direct>on</stimer_direct>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </defaults>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </hyperv>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <launchSecurity supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='sectype'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tdx</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </launchSecurity>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: </domainCapabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
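The _get_domain_capabilities call logged above wraps libvirt's virConnectGetDomainCapabilities API, which returns exactly the kind of XML document dumped in this entry. A minimal sketch of the same query via libvirt-python follows, assuming a local qemu:///system connection with sufficient privileges; the emulator path matches the <path> element in the dump, while the arch and machine-type arguments are illustrative placeholders, since nova iterates over every supported arch/machine pair (the next log entry below queries arch=i686, machine_type=pc).

import libvirt
import xml.etree.ElementTree as ET

# Sketch only: fetch the domainCapabilities document that nova logs above.
conn = libvirt.open('qemu:///system')   # assumes a local libvirtd
caps_xml = conn.getDomainCapabilities(
    '/usr/libexec/qemu-kvm',            # emulator binary, as in <path>
    'x86_64',                           # arch (assumption for this sketch)
    'q35',                              # machine type (assumption)
    'kvm')                              # virt type, as in <domain>
root = ET.fromstring(caps_xml)

# List named CPU models in mode='custom' and whether each is usable here.
for model in root.findall(".//cpu/mode[@name='custom']/model"):
    print(model.text, model.get('usable'))

# For unusable models, print the host features that block them, mirroring
# the <blockers> elements seen in the dump.
for blockers in root.findall(".//cpu/mode[@name='custom']/blockers"):
    names = [f.get('name') for f in blockers.findall('feature')]
    print(blockers.get('model'), '->', ', '.join(names))

conn.close()

Reading the output this way explains the dump's structure: a model such as Skylake-Server is marked usable='no' because the EPYC-Rome host reported under mode='host-model' lacks the listed Intel features (avx512*, hle, rtm, pku, ...), whereas older models like Westmere carry no blockers and are usable='yes'.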
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.298 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Nov 29 10:16:49 np0005539860 nova_compute[189485]: <domainCapabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <domain>kvm</domain>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <arch>i686</arch>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <vcpu max='240'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <iothreads supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <os supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <enum name='firmware'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <loader supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>rom</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pflash</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='readonly'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>yes</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>no</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='secure'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>no</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </loader>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </os>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='host-passthrough' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='hostPassthroughMigratable'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>on</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>off</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='maximum' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='maximumMigratable'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>on</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>off</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='host-model' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <vendor>AMD</vendor>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='x2apic'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='hypervisor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='stibp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='overflow-recov'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='succor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='lbrv'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc-scale'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='flushbyasid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='pause-filter'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='pfthreshold'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='disable' name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='custom' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Dhyana-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Genoa'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='auto-ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='auto-ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-128'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-256'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-512'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v6'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v7'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='KnightsMill'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512er'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512pf'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='KnightsMill-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512er'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512pf'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G4-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tbm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G5-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tbm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SierraForest'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cmpccxadd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SierraForest-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cmpccxadd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='athlon'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='athlon-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='core2duo'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='core2duo-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='coreduo'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='coreduo-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='n270'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='n270-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='phenom'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='phenom-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <memoryBacking supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <enum name='sourceType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>file</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>anonymous</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>memfd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </memoryBacking>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <devices>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <disk supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='diskDevice'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>disk</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>cdrom</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>floppy</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>lun</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='bus'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ide</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>fdc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>scsi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>sata</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-non-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </disk>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <graphics supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vnc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>egl-headless</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dbus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </graphics>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <video supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='modelType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vga</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>cirrus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>none</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>bochs</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ramfb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </video>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <hostdev supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='mode'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>subsystem</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='startupPolicy'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>default</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>mandatory</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>requisite</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>optional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='subsysType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pci</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>scsi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='capsType'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='pciBackend'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </hostdev>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <rng supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-non-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>random</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>egd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>builtin</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </rng>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <filesystem supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='driverType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>path</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>handle</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtiofs</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </filesystem>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <tpm supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tpm-tis</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tpm-crb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>emulator</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>external</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendVersion'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>2.0</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </tpm>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <redirdev supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='bus'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </redirdev>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <channel supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pty</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>unix</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </channel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <crypto supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>qemu</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>builtin</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </crypto>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <interface supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>default</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>passt</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </interface>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <panic supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>isa</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>hyperv</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </panic>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <console supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>null</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pty</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dev</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>file</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pipe</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>stdio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>udp</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tcp</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>unix</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>qemu-vdagent</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dbus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </console>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </devices>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <gic supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <vmcoreinfo supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <genid supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <backingStoreInput supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <backup supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <async-teardown supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <ps2 supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <sev supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <sgx supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <hyperv supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='features'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>relaxed</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vapic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>spinlocks</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vpindex</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>runtime</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>synic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>stimer</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>reset</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vendor_id</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>frequencies</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>reenlightenment</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tlbflush</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ipi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>avic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>emsr_bitmap</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>xmm_input</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <defaults>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <spinlocks>4095</spinlocks>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <stimer_direct>on</stimer_direct>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </defaults>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </hyperv>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <launchSecurity supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='sectype'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tdx</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </launchSecurity>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: </domainCapabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
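The <domainCapabilities> document ending above is what libvirt returns from virConnectGetDomainCapabilities() for one emulator/machine-type combination. A minimal sketch of fetching the same report directly, assuming a local qemu:///system connection and the libvirt-python bindings (the emulator path, arch, virt type, and the 'q35' machine type are taken from the surrounding log lines; this is not the exact Nova call site):

    import libvirt

    conn = libvirt.open('qemu:///system')
    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    print(xml)  # prints a <domainCapabilities> document like the one logged above
    conn.close()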
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.321 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
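The {'q35', 'pc'} set iterated here reflects machine types the host hypervisor advertises. One way to enumerate them from virConnectGetCapabilities() with libvirt-python (a sketch under that assumption, not necessarily Nova's exact _get_machine_types logic; the host capabilities XML lists full names such as pc-q35-rhel9.8.0 alongside short aliases):

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    caps = ET.fromstring(conn.getCapabilities())
    # Collect the <machine> elements advertised for x86_64 guests.
    machines = {m.text
                for arch in caps.findall("./guest/arch[@name='x86_64']")
                for m in arch.findall('machine')}
    print(machines)
    conn.close()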
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.325 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Nov 29 10:16:49 np0005539860 nova_compute[189485]: <domainCapabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <domain>kvm</domain>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <machine>pc-q35-rhel9.8.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <arch>x86_64</arch>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <vcpu max='4096'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <iothreads supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <os supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <enum name='firmware'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>efi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <loader supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>rom</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pflash</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='readonly'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>yes</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>no</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='secure'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>yes</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>no</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </loader>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </os>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='host-passthrough' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='hostPassthroughMigratable'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>on</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>off</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='maximum' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='maximumMigratable'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>on</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>off</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='host-model' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <vendor>AMD</vendor>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='x2apic'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='hypervisor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='stibp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='overflow-recov'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='succor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='lbrv'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc-scale'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='flushbyasid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='pause-filter'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='pfthreshold'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='disable' name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='custom' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Dhyana-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Genoa'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='auto-ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='auto-ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-128'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-256'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-512'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v6'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v7'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='KnightsMill'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512er'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512pf'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='KnightsMill-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512er'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512pf'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G4-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tbm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G5-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tbm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SierraForest'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cmpccxadd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SierraForest-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cmpccxadd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='athlon'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='athlon-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='core2duo'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='core2duo-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='coreduo'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='coreduo-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='n270'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='n270-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='phenom'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='phenom-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <memoryBacking supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <enum name='sourceType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>file</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>anonymous</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>memfd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </memoryBacking>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <devices>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <disk supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='diskDevice'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>disk</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>cdrom</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>floppy</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>lun</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='bus'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>fdc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>scsi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>sata</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-non-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </disk>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <graphics supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vnc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>egl-headless</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dbus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </graphics>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <video supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='modelType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vga</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>cirrus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>none</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>bochs</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ramfb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </video>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <hostdev supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='mode'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>subsystem</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='startupPolicy'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>default</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>mandatory</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>requisite</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>optional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='subsysType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pci</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>scsi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='capsType'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='pciBackend'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </hostdev>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <rng supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-non-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>random</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>egd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>builtin</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </rng>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <filesystem supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='driverType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>path</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>handle</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtiofs</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </filesystem>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <tpm supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tpm-tis</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tpm-crb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>emulator</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>external</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendVersion'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>2.0</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </tpm>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <redirdev supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='bus'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </redirdev>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <channel supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pty</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>unix</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </channel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <crypto supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>qemu</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>builtin</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </crypto>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <interface supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>default</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>passt</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </interface>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <panic supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>isa</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>hyperv</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </panic>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <console supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>null</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pty</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dev</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>file</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pipe</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>stdio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>udp</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tcp</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>unix</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>qemu-vdagent</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dbus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </console>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </devices>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <gic supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <vmcoreinfo supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <genid supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <backingStoreInput supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <backup supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <async-teardown supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <ps2 supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <sev supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <sgx supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <hyperv supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='features'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>relaxed</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vapic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>spinlocks</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vpindex</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>runtime</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>synic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>stimer</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>reset</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vendor_id</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>frequencies</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>reenlightenment</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tlbflush</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ipi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>avic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>emsr_bitmap</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>xmm_input</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <defaults>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <spinlocks>4095</spinlocks>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <stimer_direct>on</stimer_direct>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </defaults>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </hyperv>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <launchSecurity supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='sectype'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tdx</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </launchSecurity>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: </domainCapabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
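The dump above is the raw XML returned by libvirt's getDomainCapabilities() call, which nova's _get_domain_capabilities() logs here. As a minimal sketch (assuming libvirt-python is installed and a local qemu:///system connection is reachable; the four query parameters are read straight off the dump: <path>, <arch>, the machine_type from the log line, and <domain>), the same document can be fetched and its <blockers> entries summarized like this:

    import libvirt
    import xml.etree.ElementTree as ET

    # Query the same capabilities document nova logs above. The positional
    # arguments mirror the dump: emulator path, arch, machine type, virt type.
    # qemu:///system as the connection URI is an assumption for a local host.
    conn = libvirt.open('qemu:///system')
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'pc', 'kvm')
    conn.close()

    # For every CPU model reported usable='no' under <mode name='custom'>,
    # print the host-missing features that block it (the <blockers> elements).
    root = ET.fromstring(caps_xml)
    for blockers in root.findall("./cpu/mode[@name='custom']/blockers"):
        features = [f.get('name') for f in blockers.findall('feature')]
        print(blockers.get('model'), '->', ', '.join(features))

Run against this host it would print, for example, Skylake-Client-v1 -> erms, hle, invpcid, pcid, rtm, matching the blockers logged above.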
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.381 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Nov 29 10:16:49 np0005539860 nova_compute[189485]: <domainCapabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <path>/usr/libexec/qemu-kvm</path>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <domain>kvm</domain>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <machine>pc-i440fx-rhel7.6.0</machine>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <arch>x86_64</arch>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <vcpu max='240'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <iothreads supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <os supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <enum name='firmware'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <loader supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>rom</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pflash</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='readonly'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>yes</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>no</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='secure'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>no</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </loader>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </os>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='host-passthrough' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='hostPassthroughMigratable'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>on</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>off</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='maximum' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='maximumMigratable'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>on</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>off</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='host-model' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model fallback='forbid'>EPYC-Rome</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <vendor>AMD</vendor>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <maxphysaddr mode='passthrough' limit='40'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='x2apic'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc-deadline'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='hypervisor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc_adjust'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='spec-ctrl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='stibp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='cmp_legacy'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='overflow-recov'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='succor'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='amd-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='virt-ssbd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='lbrv'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='tsc-scale'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='vmcb-clean'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='flushbyasid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='pause-filter'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='pfthreshold'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='svme-addr-chk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='require' name='lfence-always-serializing'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <feature policy='disable' name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <mode name='custom' supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Broadwell-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cascadelake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Cooperlake-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Denverton-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Dhyana-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Genoa'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='auto-ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Genoa-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='auto-ibrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Milan-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amd-psfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='no-nested-data-bp'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='null-sel-clr-base'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='stibp-always-on'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-Rome-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='EPYC-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='GraniteRapids-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-128'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-256'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx10-512'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='prefetchiti'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Haswell-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-noTSX'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v6'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Icelake-Server-v7'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='IvyBridge-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='KnightsMill'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512er'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512pf'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='KnightsMill-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4fmaps'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-4vnniw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512er'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512pf'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G4-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tbm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Opteron_G5-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fma4'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tbm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xop'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SapphireRapids-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='amx-tile'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-bf16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-fp16'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512-vpopcntdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bitalg'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vbmi2'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrc'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fzrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='la57'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='taa-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='tsx-ldtrk'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xfd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SierraForest'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cmpccxadd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='SierraForest-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ifma'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-ne-convert'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx-vnni-int8'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='bus-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cmpccxadd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fbsdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='fsrs'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ibrs-all'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mcdt-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pbrsb-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='psdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='sbdr-ssdp-no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='serialize'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vaes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='vpclmulqdq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Client-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='hle'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='rtm'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Skylake-Server-v5'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512bw'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512cd'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512dq'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512f'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='avx512vl'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='invpcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pcid'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='pku'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='mpx'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v2'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v3'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='core-capability'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='split-lock-detect'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='Snowridge-v4'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='cldemote'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='erms'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='gfni'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdir64b'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='movdiri'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='xsaves'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='athlon'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='athlon-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='core2duo'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='core2duo-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='coreduo'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='coreduo-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='n270'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='n270-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='ss'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='phenom'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <blockers model='phenom-v1'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnow'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <feature name='3dnowext'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </blockers>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </mode>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </cpu>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <memoryBacking supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <enum name='sourceType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>file</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>anonymous</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <value>memfd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </memoryBacking>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <devices>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <disk supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='diskDevice'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>disk</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>cdrom</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>floppy</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>lun</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='bus'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ide</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>fdc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>scsi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>sata</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-non-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </disk>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <graphics supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vnc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>egl-headless</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dbus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </graphics>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <video supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='modelType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vga</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>cirrus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>none</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>bochs</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ramfb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </video>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <hostdev supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='mode'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>subsystem</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='startupPolicy'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>default</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>mandatory</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>requisite</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>optional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='subsysType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pci</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>scsi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='capsType'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='pciBackend'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </hostdev>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <rng supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtio-non-transitional</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>random</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>egd</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>builtin</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </rng>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <filesystem supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='driverType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>path</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>handle</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>virtiofs</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </filesystem>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <tpm supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tpm-tis</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tpm-crb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>emulator</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>external</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendVersion'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>2.0</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </tpm>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <redirdev supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='bus'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>usb</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </redirdev>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <channel supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pty</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>unix</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </channel>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <crypto supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>qemu</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendModel'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>builtin</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </crypto>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <interface supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='backendType'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>default</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>passt</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </interface>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <panic supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='model'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>isa</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>hyperv</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </panic>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <console supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='type'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>null</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vc</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pty</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dev</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>file</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>pipe</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>stdio</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>udp</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tcp</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>unix</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>qemu-vdagent</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>dbus</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </console>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </devices>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  <features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <gic supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <vmcoreinfo supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <genid supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <backingStoreInput supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <backup supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <async-teardown supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <ps2 supported='yes'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <sev supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <sgx supported='no'/>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <hyperv supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='features'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>relaxed</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vapic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>spinlocks</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vpindex</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>runtime</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>synic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>stimer</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>reset</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>vendor_id</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>frequencies</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>reenlightenment</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tlbflush</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>ipi</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>avic</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>emsr_bitmap</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>xmm_input</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <defaults>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <spinlocks>4095</spinlocks>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <stimer_direct>on</stimer_direct>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <tlbflush_direct>on</tlbflush_direct>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <tlbflush_extended>on</tlbflush_extended>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <vendor_id>Linux KVM Hv</vendor_id>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </defaults>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </hyperv>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    <launchSecurity supported='yes'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      <enum name='sectype'>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:        <value>tdx</value>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:      </enum>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:    </launchSecurity>
Nov 29 10:16:49 np0005539860 nova_compute[189485]:  </features>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: </domainCapabilities>
Nov 29 10:16:49 np0005539860 nova_compute[189485]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
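The <domainCapabilities> dump above is what nova's _get_domain_capabilities fetched from libvirt for startup feature detection. The same document can be retrieved with the libvirt Python bindings; a minimal sketch (connection URI and emulator path are assumptions, values mirror what this host would report):

    import libvirt  # requires the libvirt-python bindings

    # Read-only connection to the local QEMU/KVM driver (URI is an assumption).
    conn = libvirt.openReadOnly("qemu:///system")

    # Ask libvirt what a guest with these parameters could use; this returns
    # the same <domainCapabilities> XML seen in the log above.
    xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # emulator binary (assumed RHEL-style path)
        "x86_64",                 # arch
        "q35",                    # machine type
        "kvm",                    # virt type
        0,                        # flags
    )
    print(xml)
    conn.close()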
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.439 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.440 189489 INFO nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Secure Boot support detected#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.442 189489 INFO nova.virt.libvirt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.448 189489 DEBUG nova.virt.libvirt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.487 189489 INFO nova.virt.node [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Determined node identity 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from /var/lib/nova/compute_id#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.511 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Verified node 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd matches my host compute-0.ctlplane.example.com _check_for_host_rename /usr/lib/python3.9/site-packages/nova/compute/manager.py:1568#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.555 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Nov 29 10:16:49 np0005539860 nova_compute[189485]: 2025-11-29 15:16:49.996 189489 ERROR nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Could not retrieve compute node resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd and therefore unable to error out any instances stuck in BUILDING state. Error: Failed to retrieve allocations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd' not found: No resource provider with uuid 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd found  ", "request_id": "req-5e923015-361e-4e7f-8bc4-864d0ef8be59"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd' not found: No resource provider with uuid 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd found  ", "request_id": "req-5e923015-361e-4e7f-8bc4-864d0ef8be59"}]}#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.032 189489 DEBUG oslo_concurrency.lockutils [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.032 189489 DEBUG oslo_concurrency.lockutils [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.033 189489 DEBUG oslo_concurrency.lockutils [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.033 189489 DEBUG nova.compute.resource_tracker [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.232 189489 WARNING nova.virt.libvirt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.233 189489 DEBUG nova.compute.resource_tracker [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6073MB free_disk=72.60951232910156GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
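The pci_devices list in the resource view above uses raw PCI IDs: vendor 1af4 is Red Hat, Inc. (virtio devices) and 8086 is Intel, which is why every disk/network function on this Nova guest shows up as 1af4. A tiny lookup sketch over two of the logged entries:

    # Well-known PCI vendor IDs appearing in the hypervisor resource view.
    PCI_VENDORS = {
        "1af4": "Red Hat, Inc. (virtio)",
        "8086": "Intel Corporation",
    }
    for address, vendor_id, product_id in [
        ("0000:00:07.0", "1af4", "1000"),  # virtio-net
        ("0000:00:01.0", "8086", "7000"),  # PIIX3 ISA bridge
    ]:
        print(f"{address} {vendor_id}:{product_id} -> {PCI_VENDORS[vendor_id]}")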
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.233 189489 DEBUG oslo_concurrency.lockutils [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.233 189489 DEBUG oslo_concurrency.lockutils [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.408 189489 ERROR nova.compute.resource_tracker [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Skipping removal of allocations for deleted instances: Failed to retrieve allocations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd' not found: No resource provider with uuid 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd found  ", "request_id": "req-aae99981-24ab-4818-b185-2cc4955a93c5"}]}: nova.exception.ResourceProviderAllocationRetrievalFailed: Failed to retrieve allocations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd: {"errors": [{"status": 404, "title": "Not Found", "detail": "The resource could not be found.\n\n Resource provider '4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd' not found: No resource provider with uuid 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd found  ", "request_id": "req-aae99981-24ab-4818-b185-2cc4955a93c5"}]}#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.409 189489 DEBUG nova.compute.resource_tracker [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.409 189489 DEBUG nova.compute.resource_tracker [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.849 189489 INFO nova.scheduler.client.report [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [req-dfb97e94-f9ab-41a6-b073-d5dd25495472] Created resource provider record via placement API for resource provider with UUID 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd and name compute-0.ctlplane.example.com.#033[00m
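The earlier 404 errors (req-5e923015..., req-aae99981...) and the creation at 15:16:50.849 are the normal first-start sequence: nova asks placement for the provider's allocations, gets "not found" because the provider does not exist yet, then registers it. A hedged sketch of that exchange as plain HTTP against the placement REST API (endpoint and token are assumptions; nova does this through its keystone session internally):

    import requests  # illustrating the placement calls directly

    PLACEMENT = "http://placement.example.com"      # assumption
    HEADERS = {
        "X-Auth-Token": "<token>",                  # assumption
        "OpenStack-API-Version": "placement 1.38",  # any recent microversion
    }
    uuid = "4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd"

    # Step 1: the allocation lookup that produced the 404 in the log.
    r = requests.get(f"{PLACEMENT}/resource_providers/{uuid}/allocations",
                     headers=HEADERS)
    if r.status_code == 404:
        # Step 2: register the provider, as the report client did above.
        requests.post(f"{PLACEMENT}/resource_providers", headers=HEADERS,
                      json={"uuid": uuid,
                            "name": "compute-0.ctlplane.example.com"})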
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.898 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N\n] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.899 189489 INFO nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] kernel doesn't support AMD SEV#033[00m
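The "kernel doesn't support AMD SEV" conclusion follows directly from the [N\n] read at 15:16:50.898: kvm_amd exposes SEV support as a module parameter, and anything other than an affirmative value means unsupported. A minimal sketch of the same probe (the exact parsing in nova may differ):

    from pathlib import Path

    # kvm_amd publishes SEV support as a module parameter; "Y"/"1" = enabled.
    param = Path("/sys/module/kvm_amd/parameters/sev")
    supported = param.exists() and param.read_text().strip() in ("Y", "y", "1")
    print("AMD SEV kernel support:", supported)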
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.900 189489 DEBUG nova.compute.provider_tree [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.901 189489 DEBUG nova.virt.libvirt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.981 189489 DEBUG nova.scheduler.client.report [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Updated inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.982 189489 DEBUG nova.compute.provider_tree [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Updating resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 10:16:50 np0005539860 nova_compute[189485]: 2025-11-29 15:16:50.982 189489 DEBUG nova.compute.provider_tree [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
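The inventory pushed to placement above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class, so this host advertises 7167 MB of RAM, 32 VCPUs, and 71.1 GB of disk. A quick check of the logged numbers:

    # Effective capacity as placement computes it: (total - reserved) * ratio.
    inventory = {
        "MEMORY_MB": (7679, 512, 1.0),
        "VCPU":      (8,    0,   4.0),
        "DISK_GB":   (79,   0,   0.9),
    }
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 71.1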
Nov 29 10:16:51 np0005539860 nova_compute[189485]: 2025-11-29 15:16:51.100 189489 DEBUG nova.compute.provider_tree [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Updating resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 10:16:51 np0005539860 nova_compute[189485]: 2025-11-29 15:16:51.127 189489 DEBUG nova.compute.resource_tracker [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 10:16:51 np0005539860 nova_compute[189485]: 2025-11-29 15:16:51.128 189489 DEBUG oslo_concurrency.lockutils [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.895s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:16:51 np0005539860 nova_compute[189485]: 2025-11-29 15:16:51.128 189489 DEBUG nova.service [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Nov 29 10:16:51 np0005539860 nova_compute[189485]: 2025-11-29 15:16:51.217 189489 DEBUG nova.service [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Nov 29 10:16:51 np0005539860 nova_compute[189485]: 2025-11-29 15:16:51.218 189489 DEBUG nova.servicegroup.drivers.db [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Nov 29 10:16:53 np0005539860 systemd-logind[794]: New session 25 of user zuul.
Nov 29 10:16:53 np0005539860 systemd[1]: Started Session 25 of User zuul.
Nov 29 10:16:54 np0005539860 python3.9[189939]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:16:56 np0005539860 python3.9[190095]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:16:56 np0005539860 systemd[1]: Reloading.
Nov 29 10:16:56 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:16:56 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:16:57 np0005539860 python3.9[190279]: ansible-ansible.builtin.service_facts Invoked
Nov 29 10:16:57 np0005539860 network[190296]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 10:16:57 np0005539860 network[190297]: 'network-scripts' will be removed from distribution in near future.
Nov 29 10:16:57 np0005539860 network[190298]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 10:16:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:16:59.139 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 10:16:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:16:59.142 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 10:16:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:16:59.142 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 10:17:03 np0005539860 nova_compute[189485]: 2025-11-29 15:17:03.221 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 10:17:03 np0005539860 nova_compute[189485]: 2025-11-29 15:17:03.243 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 10:17:03 np0005539860 python3.9[190572]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:17:03 np0005539860 podman[190574]: 2025-11-29 15:17:03.422903819 +0000 UTC m=+0.105903712 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible)
Nov 29 10:17:04 np0005539860 podman[190723]: 2025-11-29 15:17:04.120479694 +0000 UTC m=+0.068983197 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 10:17:04 np0005539860 python3.9[190769]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:04 np0005539860 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 10:17:04 np0005539860 python3.9[190924]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:05 np0005539860 python3.9[191076]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
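In the _raw_params above, "#012" is journald's octal escape for a newline, so the task actually ran a small multi-line shell script. Decoding it makes the logic readable:

    # Decode the journald-escaped newlines in the logged certmonger command.
    raw = ("if systemctl is-active certmonger.service; then#012"
           "  systemctl disable --now certmonger.service#012"
           "  test -f /etc/systemd/system/certmonger.service"
           " || systemctl mask certmonger.service#012fi#012")
    print(raw.replace("#012", "\n"))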
Nov 29 10:17:06 np0005539860 python3.9[191228]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 10:17:07 np0005539860 podman[191352]: 2025-11-29 15:17:07.373517444 +0000 UTC m=+0.054884209 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 10:17:07 np0005539860 python3.9[191400]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:17:07 np0005539860 systemd[1]: Reloading.
Nov 29 10:17:07 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:17:07 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:17:08 np0005539860 python3.9[191587]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:17:09 np0005539860 python3.9[191740]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:17:10 np0005539860 python3.9[191890]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:17:11 np0005539860 python3.9[192042]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:11 np0005539860 python3.9[192165]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429430.7140598-133-91749957157625/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:17:12 np0005539860 python3.9[192317]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Nov 29 10:17:13 np0005539860 python3.9[192469]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 29 10:17:14 np0005539860 python3.9[192622]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Nov 29 10:17:15 np0005539860 python3.9[192780]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Nov 29 10:17:17 np0005539860 python3.9[192938]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:18 np0005539860 python3.9[193059]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764429436.8067129-201-174089363205591/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:18 np0005539860 python3.9[193209]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:19 np0005539860 python3.9[193330]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764429438.3034163-201-151850789159561/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:20 np0005539860 python3.9[193480]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:20 np0005539860 python3.9[193601]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764429439.6228194-201-266031333035720/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:21 np0005539860 python3.9[193751]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:17:22 np0005539860 python3.9[193903]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:17:23 np0005539860 python3.9[194055]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:23 np0005539860 python3.9[194176]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429442.7253501-260-131220594585920/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:24 np0005539860 python3.9[194326]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:25 np0005539860 python3.9[194402]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:25 np0005539860 python3.9[194552]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:26 np0005539860 python3.9[194673]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429445.315872-260-84098116892172/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:27 np0005539860 python3.9[194823]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:27 np0005539860 python3.9[194944]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429446.7742033-260-180016250351179/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:28 np0005539860 python3.9[195094]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:29 np0005539860 python3.9[195215]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429448.0994668-260-53803960400760/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:29 np0005539860 python3.9[195365]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:30 np0005539860 python3.9[195486]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429449.356323-260-30192123569077/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:31 np0005539860 python3.9[195636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:31 np0005539860 python3.9[195757]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429450.5739622-260-262174330964984/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:32 np0005539860 python3.9[195907]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:32 np0005539860 python3.9[196028]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429451.831651-260-204150131682212/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:33 np0005539860 python3.9[196178]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:33 np0005539860 podman[196179]: 2025-11-29 15:17:33.702360169 +0000 UTC m=+0.148007558 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 10:17:34 np0005539860 python3.9[196326]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429453.1259632-260-74793554672460/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:34 np0005539860 podman[196327]: 2025-11-29 15:17:34.370031533 +0000 UTC m=+0.055886225 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 10:17:35 np0005539860 python3.9[196493]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:35 np0005539860 python3.9[196614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429454.6358008-260-18979572280144/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:36 np0005539860 python3.9[196764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:37 np0005539860 python3.9[196885]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429456.0736864-260-14772444415098/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
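Several of the copy tasks above log mode=420; that is the decimal rendering of octal 0644 (an unquoted YAML number reaches the Ansible module as decimal, while the quoted "0644" form elsewhere in the log stays octal). A one-liner confirms the equivalence:

    import stat
    print(oct(420))              # 0o644
    print(stat.filemode(0o644))  # -rw-r--r--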
Nov 29 10:17:37 np0005539860 podman[196910]: 2025-11-29 15:17:37.635518527 +0000 UTC m=+0.077996287 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 10:17:38 np0005539860 python3.9[197056]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:38 np0005539860 python3.9[197132]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:39 np0005539860 python3.9[197282]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:40 np0005539860 python3.9[197358]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:40 np0005539860 python3.9[197508]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:41 np0005539860 python3.9[197584]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:42 np0005539860 python3.9[197736]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:43 np0005539860 python3.9[197888]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:43 np0005539860 python3.9[198040]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:17:44 np0005539860 python3.9[198192]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:17:44 np0005539860 systemd[1]: Reloading.
Nov 29 10:17:44 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:17:44 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:17:45 np0005539860 systemd[1]: Listening on Podman API Socket.
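podman.socket activates the libpod REST API on /run/podman/podman.sock, which the podman_exporter configured earlier needs in order to query container state. A minimal stdlib sketch, assuming that socket path and read access to it, that pings the API over the Unix socket:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix domain socket (the host header is ignored here)."""
        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/_ping")     # compatibility ping endpoint
    print(conn.getresponse().status)  # 200 while the API socket is serving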
Nov 29 10:17:45 np0005539860 python3.9[198384]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:46 np0005539860 python3.9[198507]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429465.5229177-482-131528788462159/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:17:47 np0005539860 python3.9[198583]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:17:47 np0005539860 python3.9[198706]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429465.5229177-482-131528788462159/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.487 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.487 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.515 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.516 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.517 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.517 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.517 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.518 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.518 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.519 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.519 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.559 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.560 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.560 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.560 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.760 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.762 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6015MB free_disk=72.60929107666016GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.762 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.763 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:17:48 np0005539860 python3.9[198858]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.842 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.843 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.882 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.910 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.913 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 10:17:48 np0005539860 nova_compute[189485]: 2025-11-29 15:17:48.913 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
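The periodic-task and lock lines above come from two oslo libraries: oslo.service collects methods decorated as periodic tasks and prints "Running periodic task ..." as it drives them, and oslo.concurrency's lockutils logs the Acquiring/acquired/released triple around the named "compute_resources" semaphore. A minimal sketch of both patterns, assuming oslo.service and oslo.concurrency are installed (Manager and the function body are illustrative, not nova's code):

    from oslo_concurrency import lockutils
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # run_periodic_tasks() walks decorated methods and emits the
        # "Running periodic task ..." debug lines seen above.
        @periodic_task.periodic_task(spacing=60)
        def _check_instance_build_time(self, context):
            pass

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # The named semaphore is held for the whole body; lockutils' inner()
        # wrapper logs the acquire/release pairs with wait and hold times.
        pass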
Nov 29 10:17:49 np0005539860 python3.9[199010]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:17:51 np0005539860 python3[199162]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:17:51 np0005539860 podman[199201]: 2025-11-29 15:17:51.620058299 +0000 UTC m=+0.067502886 container create 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true)
Nov 29 10:17:51 np0005539860 podman[199201]: 2025-11-29 15:17:51.579475464 +0000 UTC m=+0.026920081 image pull 4c40094793b487edb878e6f339e5974acc471f14f5a7d3266faecb44581a8770 quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Nov 29 10:17:51 np0005539860 python3[199162]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
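The PODMAN-CONTAINER-DEBUG line above shows edpm_container_manage flattening config_data into podman create flags. A sketch of that translation for the keys visible in the log (podman_create_args is a hypothetical helper, not the module's real code):

    def podman_create_args(name: str, cfg: dict) -> list[str]:
        # environment -> --env, net -> --network, volumes -> --volume, etc.,
        # matching the mapping visible in the debug line above.
        args = ["podman", "create", "--name", name]
        for key, val in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={val}"]
        if "net" in cfg:
            args += ["--network", cfg["net"]]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        if "security_opt" in cfg:
            args += ["--security-opt", cfg["security_opt"]]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        if "command" in cfg:
            args.append(cfg["command"])
        return args

Feeding this the config_data dict from the create event reproduces the --env/--network/--security-opt/--user/--volume sequence logged above.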
Nov 29 10:17:52 np0005539860 python3.9[199390]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:17:53 np0005539860 python3.9[199544]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:54 np0005539860 python3.9[199695]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764429473.8197858-546-122426811812998/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:17:55 np0005539860 python3.9[199771]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:17:55 np0005539860 systemd[1]: Reloading.
Nov 29 10:17:55 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:17:55 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:17:56 np0005539860 python3.9[199882]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:17:57 np0005539860 systemd[1]: Reloading.
Nov 29 10:17:57 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:17:57 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
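The unit install sequence above (remove the stale .requires directory, copy edpm_ceilometer_agent_compute.service into /etc/systemd/system, daemon-reload, then restart with enabled=True) is ansible.builtin.systemd's standard dance. A minimal subprocess sketch of the same sequence, assuming systemctl on PATH and root privileges:

    import subprocess

    def restart_enabled(unit: str) -> None:
        # Reload unit files first so the freshly copied .service is known,
        # then enable it for boot and (re)start it now.
        subprocess.run(["systemctl", "daemon-reload"], check=True)
        subprocess.run(["systemctl", "enable", unit], check=True)
        subprocess.run(["systemctl", "restart", unit], check=True)

    restart_enabled("edpm_ceilometer_agent_compute.service")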
Nov 29 10:17:57 np0005539860 auditd[702]: Audit daemon rotating log files
Nov 29 10:17:58 np0005539860 systemd[1]: Starting ceilometer_agent_compute container...
Nov 29 10:17:58 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:17:58 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:17:58 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:17:58 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 29 10:17:58 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 29 10:17:58 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.
Nov 29 10:17:58 np0005539860 podman[199922]: 2025-11-29 15:17:58.187734946 +0000 UTC m=+0.134778345 container init 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + sudo -E kolla_set_configs
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: sudo: unable to send audit message: Operation not permitted
Nov 29 10:17:58 np0005539860 podman[199922]: 2025-11-29 15:17:58.220955854 +0000 UTC m=+0.167999243 container start 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 10:17:58 np0005539860 podman[199922]: ceilometer_agent_compute
Nov 29 10:17:58 np0005539860 systemd[1]: Started ceilometer_agent_compute container.
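The "Started /usr/bin/podman healthcheck run <id>" message above is the transient unit systemd spawns on the healthcheck interval; its exit status is the check result, which is why an early run can fail with status=1 below while the agent is still starting (health_status=starting, health_failing_streak=1). A minimal sketch, assuming the ceilometer_agent_compute container exists, of invoking the same check by hand:

    import subprocess

    def run_healthcheck(container: str) -> bool:
        # "podman healthcheck run" executes the container's configured test
        # (here: /openstack/healthcheck compute) and exits 0 on success.
        res = subprocess.run(["podman", "healthcheck", "run", container])
        return res.returncode == 0

    print(run_healthcheck("ceilometer_agent_compute"))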
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Validating config file
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Copying service configuration files
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: INFO:__main__:Writing out command to execute
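The Deleting/Copying/Setting-permission lines above are kolla_set_configs executing /var/lib/kolla/config_files/config.json, whose entries pair a source with a dest, owner, and perm, plus the command that gets written out at the end. A sketch of that structure as Python data, with values reconstructed from the log for illustration (the real tool also applies owner and perm, which the loop below omits):

    import shutil

    CONFIG = {
        "command": "/usr/bin/ceilometer-polling --polling-namespaces compute "
                   "--logfile /dev/stdout",
        "config_files": [
            {"source": "/var/lib/openstack/config/ceilometer.conf",
             "dest": "/etc/ceilometer/ceilometer.conf",
             "owner": "ceilometer", "perm": "0600"},
            {"source": "/var/lib/openstack/config/polling.yaml",
             "dest": "/etc/ceilometer/polling.yaml",
             "owner": "ceilometer", "perm": "0600"},
        ],
    }

    def apply(config: dict) -> None:
        # COPY_ALWAYS strategy: replace each destination unconditionally,
        # as the Deleting/Copying pairs above show.
        for item in config["config_files"]:
            shutil.copy(item["source"], item["dest"])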
Nov 29 10:17:58 np0005539860 podman[199944]: 2025-11-29 15:17:58.292332763 +0000 UTC m=+0.059517473 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: ++ cat /run_command
Nov 29 10:17:58 np0005539860 systemd[1]: 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1-333b5921a4b5771a.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:17:58 np0005539860 systemd[1]: 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1-333b5921a4b5771a.service: Failed with result 'exit-code'.
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + ARGS=
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + sudo kolla_copy_cacerts
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: sudo: unable to send audit message: Operation not permitted
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + [[ ! -n '' ]]
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + . kolla_extend_start
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + umask 0022
Nov 29 10:17:58 np0005539860 ceilometer_agent_compute[199937]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
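The traced shell above is kolla_start's hand-off: read the command that kolla_set_configs wrote to /run_command, set the umask, and exec it so ceilometer-polling replaces the shell as the container's main process. A minimal Python rendering of that launch sequence, assuming /run_command exists:

    import os
    import shlex

    def kolla_exec(run_command_path: str = "/run_command") -> None:
        # Read the stored command, then replace this process with it,
        # mirroring the "cat /run_command ... umask 0022 ... exec" trace.
        with open(run_command_path) as f:
            cmd = shlex.split(f.read().strip())
        os.umask(0o022)
        os.execvp(cmd[0], cmd)  # does not return on success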
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.063 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.064 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.065 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.066 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.067 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.068 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.069 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.070 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.071 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.072 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.073 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.074 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.075 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
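The dump that ends above is produced by oslo.config itself: cotyledon's oslo_config_glue calls ConfigOpts.log_opt_values() whenever a service loads its configuration, which prints the banner, one DEBUG line per registered option, and renders options declared secret (coordination.backend_url, publisher.telemetry_secret, the rgw_admin_credentials keys) as ****. A minimal sketch of that mechanism, using a deliberately tiny option set rather than ceilometer's real one:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # secret=True is what makes a value print as "****" in the dump
    CONF.register_opts([cfg.StrOpt('backend_url', secret=True)],
                       group='coordination')
    CONF.register_opts([cfg.IntOpt('batch_size', default=50)],
                       group='polling')

    CONF([], project='ceilometer')           # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # banner + one line per option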
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.098 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.099 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.100 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.100 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.100 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.100 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.100 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.101 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.101 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.101 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.101 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.101 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.101 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.102 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.102 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.102 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.102 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.102 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.103 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.103 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.103 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.103 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.103 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.103 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.103 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.103 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.104 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.104 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.104 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.104 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.104 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.104 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.104 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.104 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.105 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.105 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.105 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.105 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.105 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.105 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.105 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.106 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.107 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.107 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.107 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.107 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.107 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.107 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.107 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.108 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.109 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.109 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.109 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.109 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.109 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.109 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.109 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.109 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.110 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.110 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.110 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.110 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.110 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.110 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.110 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.110 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.111 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.111 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.111 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.111 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.111 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.111 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.111 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.111 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.112 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.113 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.113 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.113 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.113 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.113 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.113 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.113 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.113 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.114 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.115 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.115 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.115 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.115 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.115 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.115 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.115 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.116 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.116 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.116 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.116 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.116 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.116 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.116 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.116 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.117 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.117 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.117 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.117 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.117 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.117 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.117 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.117 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.118 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.119 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.119 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.119 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.119 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.119 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
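Per the polling.* options in both dumps, this agent exposes a TLS-wrapped Prometheus exporter on [::]:9101 using /etc/ceilometer/tls/tls.crt and tls.key. A quick liveness probe could look like the sketch below; the /metrics path and the use of localhost are assumptions, since the log only confirms the listen address and certificate paths:

    import requests

    # Listen address and TLS material taken from the polling.* lines above.
    # verify=False skips CA validation; point verify= at the signing CA
    # instead if it is available on the host.
    resp = requests.get('https://localhost:9101/metrics',
                        verify=False, timeout=5)
    print(resp.status_code)
    print(resp.text[:400])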
Nov 29 10:17:59 np0005539860 python3.9[200119]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.119 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.122 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.125 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.125 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
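The heartbeat child service advertises a Unix domain socket under polling.heartbeat_socket_dir. A simple way to confirm it is alive is to connect and read, as sketched below; the socket type and the idea that the agent writes a status blob on connect are assumptions, since this log only confirms the path:

    import socket

    # Path taken from the "Starting heartbeat child service" line above.
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)  # SOCK_STREAM assumed
    s.connect('/var/lib/ceilometer/ceilometer-compute.socket')
    print(s.recv(4096).decode(errors='replace'))  # assumed status reply
    s.close()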
Nov 29 10:17:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:17:59.140 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:17:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:17:59.142 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:17:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:17:59.143 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
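The three ovn_metadata_agent lines above are the standard trace emitted by oslo_concurrency's lock decorator: acquiring, acquired after the time waited, released after the time held. A minimal reproduction, with an illustrative function body:

    import logging
    from oslo_concurrency import lockutils

    logging.basicConfig(level=logging.DEBUG)

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # The DEBUG lines "Acquiring lock ...", "Lock ... acquired ::
        # waited Ns" and "Lock ... released :: held Ns" bracket this body.
        pass

    check_child_processes()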
Nov 29 10:17:59 np0005539860 systemd[1]: Stopping ceilometer_agent_compute container...
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.265 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.339 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.348 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.348 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.348 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.366 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.366 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.366 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.474 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.475 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.476 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.477 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.478 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.479 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.480 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.481 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.482 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.483 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.484 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.485 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.486 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
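[Note] The block that ends above is oslo.config's standard startup dump: cotyledon's oslo_config_glue calls log_opt_values(), which prints one line per registered option and masks anything declared secret (hence the **** values for coordination.backend_url, notification.messaging_urls, publisher.telemetry_secret, the RGW keys, and service_credentials.password). A minimal sketch of that mechanism, with illustrative option names rather than ceilometer's own:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    OPTS = [
        cfg.BoolOpt('debug', default=False),
        cfg.StrOpt('password', secret=True),   # secret=True -> printed as '****'
    ]

    CONF = cfg.ConfigOpts()
    CONF.register_opts(OPTS)
    CONF(args=[], project='demo')

    # Emits one "name = value log_opt_values ..." line per option,
    # with secret options masked, just like the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)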
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.487 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.488 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[199937]: 2025-11-29 15:17:59.496 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
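[Note] The SIGTERM sequence above is cotyledon's normal graceful stop: the master (pid 2 inside the container) catches SIGTERM from systemd, re-sends it to its forked children (AgentHeartBeatManager [12] and AgentManager [14]), waits for their run() loops to exit, then logs "Shutdown finish". A minimal service/manager pair sketching that layout, with illustrative names:

    import time

    import cotyledon


    class Worker(cotyledon.Service):
        """Illustrative child; ceilometer's AgentManager plays this role."""

        def __init__(self, worker_id):
            super().__init__(worker_id)
            self._running = True

        def run(self):
            # Child main loop; cotyledon logs this as "Run service ...".
            while self._running:
                time.sleep(1)

        def terminate(self):
            # Invoked when the child receives SIGTERM from the master.
            self._running = False


    sm = cotyledon.ServiceManager()   # master process ("graceful exiting of master")
    sm.add(Worker, workers=1)         # forked child, like pids 12 and 14 above
    sm.run()                          # SIGTERM to master -> SIGTERM to each child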
Nov 29 10:17:59 np0005539860 virtqemud[189062]: End of file while reading data: Input/output error
Nov 29 10:17:59 np0005539860 systemd[1]: libpod-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope: Deactivated successfully.
Nov 29 10:17:59 np0005539860 systemd[1]: libpod-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope: Consumed 1.473s CPU time.
Nov 29 10:17:59 np0005539860 podman[200131]: 2025-11-29 15:17:59.663153226 +0000 UTC m=+0.453025882 container died 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 10:17:59 np0005539860 systemd[1]: 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1-333b5921a4b5771a.timer: Deactivated successfully.
Nov 29 10:17:59 np0005539860 systemd[1]: Stopped /usr/bin/podman healthcheck run 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.
Nov 29 10:17:59 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1-userdata-shm.mount: Deactivated successfully.
Nov 29 10:17:59 np0005539860 systemd[1]: var-lib-containers-storage-overlay-88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872-merged.mount: Deactivated successfully.
Nov 29 10:17:59 np0005539860 podman[200131]: 2025-11-29 15:17:59.72761744 +0000 UTC m=+0.517490066 container cleanup 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 10:17:59 np0005539860 podman[200131]: ceilometer_agent_compute
Nov 29 10:17:59 np0005539860 podman[200162]: ceilometer_agent_compute
Nov 29 10:17:59 np0005539860 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Nov 29 10:17:59 np0005539860 systemd[1]: Stopped ceilometer_agent_compute container.
Nov 29 10:17:59 np0005539860 systemd[1]: Starting ceilometer_agent_compute container...
Nov 29 10:17:59 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:17:59 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:17:59 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:17:59 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 29 10:17:59 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88d397b8c8f6959dd3fd3fd570bb49ff423f3384d7916893fc2e656b971f5872/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
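[Note] The xfs notices above mean these overlay mounts carry 32-bit inode timestamps, so they remain valid only until the signed 32-bit time_t rollover; 0x7fffffff decodes to 19 January 2038:

    import datetime

    # 0x7fffffff is the largest signed 32-bit time_t value.
    limit = datetime.datetime.fromtimestamp(0x7fffffff, tz=datetime.timezone.utc)
    print(limit)   # 2038-01-19 03:14:07+00:00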
Nov 29 10:17:59 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.
Nov 29 10:17:59 np0005539860 podman[200175]: 2025-11-29 15:17:59.955741449 +0000 UTC m=+0.141792792 container init 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[200190]: + sudo -E kolla_set_configs
Nov 29 10:17:59 np0005539860 ceilometer_agent_compute[200190]: sudo: unable to send audit message: Operation not permitted
Nov 29 10:17:59 np0005539860 podman[200175]: 2025-11-29 15:17:59.986014915 +0000 UTC m=+0.172066268 container start 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 29 10:17:59 np0005539860 podman[200175]: ceilometer_agent_compute
Nov 29 10:17:59 np0005539860 systemd[1]: Started ceilometer_agent_compute container.
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Validating config file
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Copying service configuration files
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: INFO:__main__:Writing out command to execute
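The Deleting / Copying / Setting permission sequence above is kolla's COPY_ALWAYS strategy applied to each config_files entry in /var/lib/kolla/config_files/config.json (mounted per the volume list in the start record), finishing with "Writing out command to execute", which persists the service command for kolla_start. A simplified Python sketch of that loop — the config.json content here is illustrative, and kolla's real set_configs.py additionally handles globs, directory merges, and ownership edge cases:

import json
import os
import shutil

# Illustrative config.json in kolla's format; the real file is mounted at
# /var/lib/kolla/config_files/config.json.
CONFIG = json.loads("""
{
  "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
  "config_files": [
    {"source": "/var/lib/openstack/config/ceilometer.conf",
     "dest": "/etc/ceilometer/ceilometer.conf",
     "owner": "ceilometer", "perm": "0600"}
  ]
}
""")

def copy_always(entry: dict) -> None:
    dest = entry["dest"]
    if os.path.exists(dest):
        print(f"Deleting {dest}")          # matches the log lines above
        os.remove(dest)
    print(f"Copying {entry['source']} to {dest}")
    shutil.copy(entry["source"], dest)
    print(f"Setting permission for {dest}")
    os.chmod(dest, int(entry["perm"], 8))
    shutil.chown(dest, user=entry["owner"])

for entry in CONFIG["config_files"]:
    copy_always(entry)

# "Writing out command to execute": persist the command for kolla_start,
# which reads it back with `cat /run_command` in the trace further down.
with open("/run_command", "w") as f:
    f.write(CONFIG["command"])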
Nov 29 10:18:00 np0005539860 podman[200199]: 2025-11-29 15:18:00.070058226 +0000 UTC m=+0.067869695 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute)
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: ++ cat /run_command
Nov 29 10:18:00 np0005539860 systemd[1]: 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1-72e4370917cae21e.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:18:00 np0005539860 systemd[1]: 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1-72e4370917cae21e.service: Failed with result 'exit-code'.
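The failed unit here is the transient service systemd runs on podman's behalf to execute the container's configured healthcheck ('/openstack/healthcheck compute' per config_data); its name is the container ID plus a random suffix, and status=1/FAILURE is the probe's exit code while the container is still in health_status=starting with health_failing_streak=1 (see the health_status record above). A sketch for re-running the probe by hand and reading the accumulated health state — container name as logged, podman assumed on PATH:

import json
import subprocess

NAME = "ceilometer_agent_compute"

# "podman healthcheck run" executes the container's configured health
# command and exits 0 (healthy) or nonzero (unhealthy) -- the same probe
# the transient <container-id>-<hash>.service unit above just ran.
probe = subprocess.run(["podman", "healthcheck", "run", NAME])
print("probe exit code:", probe.returncode)

# Accumulated state (Status, FailingStreak, Log) lives in inspect output.
state = subprocess.run(
    ["podman", "inspect", "--format", "{{json .State.Health}}", NAME],
    capture_output=True, text=True, check=True,
).stdout
print(json.dumps(json.loads(state), indent=2))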
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: + ARGS=
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: + sudo kolla_copy_cacerts
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: sudo: unable to send audit message: Operation not permitted
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: + [[ ! -n '' ]]
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: + . kolla_extend_start
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: + umask 0022
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
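The trace above is the tail of kolla_start: read the command persisted by the config step, run kolla_copy_cacerts (the sudo "unable to send audit message" warning is typical when the container lacks CAP_AUDIT_WRITE and does not stop the script), source kolla_extend_start, then exec the service so it replaces the entrypoint process. The shell script is the real mechanism; the same steps rendered as a Python sketch for clarity:

import os
import shlex

# Read the command written out earlier ("cat /run_command" in the trace).
with open("/run_command") as f:
    cmd = f.read().strip()

print(f"Running command: '{cmd}'")
os.umask(0o022)

# exec replaces this process, exactly like the shell's `exec` above, so
# ceilometer-polling inherits the entrypoint's PID and stdio.
argv = shlex.split(cmd)
os.execvp(argv[0], argv)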
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.839 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.839 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.839 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.839 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.840 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.841 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.842 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.843 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.844 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.845 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.846 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.847 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.848 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.849 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.850 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.851 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.851 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.851 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.851 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
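The block between the two rows of asterisks is oslo.config's log_opt_values(), which cotyledon's oslo_config_glue calls at DEBUG when the service manager starts. Options registered with secret=True (coordination.backend_url, notification.messaging_urls, publisher.telemetry_secret, the rgw keys) are masked as ****, and the "tenant_name_discovery" warning earlier comes from a deprecated_name alias being present in the loaded config. A minimal sketch of the same mechanism — the option names are illustrative, not ceilometer's full set, and the config paths assume the container's layout:

import logging

from oslo_config import cfg

LOG = logging.getLogger(__name__)
logging.basicConfig(level=logging.DEBUG)

OPTS = [
    cfg.BoolOpt("debug", default=False),
    # deprecated_name keeps the old key working but emits the
    # "Deprecated: Option ..." warning seen above when it is used.
    cfg.BoolOpt("identity_name_discovery", default=False,
                deprecated_name="tenant_name_discovery"),
    # secret=True is what renders the value as **** in log_opt_values().
    cfg.StrOpt("telemetry_secret", secret=True),
]

CONF = cfg.ConfigOpts()
CONF.register_opts(OPTS)

# Same layout the agent logged: one config file plus a conf.d directory.
# (These paths must exist; inside the container they do.)
CONF(["--config-file", "/etc/ceilometer/ceilometer.conf",
      "--config-dir", "/etc/ceilometer/ceilometer.conf.d"])

# Produces the banner, "Configuration options gathered from:", and the
# aligned option dump seen in the log.
CONF.log_opt_values(LOG, logging.DEBUG)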
Nov 29 10:18:00 np0005539860 python3.9[200376]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.872 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
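The heartbeat child service binds a Unix socket under polling.heartbeat_socket_dir (/var/lib/ceilometer per the dump above), named after the polling namespace. A connect-only probe, assuming only that the path is as logged and the socket is a stream socket; the wire protocol is not shown in this log, so nothing is read or written:

import socket

PATH = "/var/lib/ceilometer/ceilometer-compute.socket"

# Success means the heartbeat child service from the log line above is
# accepting connections; payload format is unknown here, so connect only.
with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
    s.settimeout(2.0)
    s.connect(PATH)
    print("heartbeat socket is accepting connections:", PATH)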
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.873 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.873 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.874 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.874 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.874 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.874 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.874 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.874 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.875 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.875 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.875 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.875 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.875 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.875 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.876 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.876 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.876 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.876 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.876 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.876 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.877 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.877 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.877 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.877 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.877 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.877 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.877 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.878 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.878 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.878 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.878 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.878 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.878 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.878 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.879 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.879 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.879 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.879 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.879 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.879 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.879 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.880 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.880 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.880 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.880 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.880 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.880 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.880 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.881 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.881 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.881 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.881 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.881 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.881 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.881 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.882 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.882 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.882 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.882 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.882 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.882 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.882 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.882 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.883 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.883 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.883 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.883 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.883 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.883 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.883 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.883 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.884 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.884 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.884 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.884 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.884 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.884 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.884 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.885 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.885 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.885 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.885 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.885 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.885 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.885 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.885 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.886 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.886 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.886 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.886 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.886 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.886 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.887 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.888 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.889 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.889 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.889 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.889 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.889 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.889 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.889 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.889 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.890 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.890 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.890 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.890 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.890 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.890 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.890 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.890 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.891 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.892 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.893 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.893 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.893 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.893 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.893 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.893 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.893 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.894 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
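For context, the banner-delimited dump above is what oslo.config's log_opt_values helper emits when cotyledon's oslo_config_glue logs the full configuration at service start: plain DEFAULT-section options first (the cfg.py:2817 lines), then group-scoped options printed as group.option (the cfg.py:2824 lines), with anything registered as secret masked to **** and a row of asterisks closing the dump (cfg.py:2828). A minimal sketch that reproduces the same shape; the two sample options below are illustrative, not the agent's real registration code:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # A DEFAULT-section option, printed under its bare name.
    CONF.register_opts([cfg.IntOpt('batch_size', default=50)])
    # A group option, printed as publisher.telemetry_secret;
    # secret=True values are rendered as **** in the dump.
    CONF.register_opts(
        [cfg.StrOpt('telemetry_secret', secret=True, default='s3cr3t')],
        group='publisher')
    CONF([])  # parse an empty command line

    # Emits the asterisk banner, the "gathered from" header, one line
    # per option, and the closing row of asterisks seen above.
    CONF.log_opt_values(LOG, logging.DEBUG)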
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.894 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.896 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.896 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.898 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.899 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
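The PID-14 worker above opens its hypervisor connection before polling: instance_discovery_method is libvirt_metadata and libvirt_uri is empty, so the qemu:///system URI from the log is used. A minimal sketch of the equivalent direct call through the libvirt Python bindings; this is illustrative, not ceilometer's own wrapper in compute/virt/libvirt/utils.py:

    # Requires the libvirt-python bindings on the host.
    import libvirt

    # Read-only access is enough for metering; the URI matches the logged one.
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        # List the domains a compute poller would inspect.
        for dom in conn.listAllDomains():
            print(dom.name(), dom.UUIDString())
    finally:
        conn.close()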
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.905 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.906 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 29 10:18:00 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:00.906 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
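The three lines above show the agent scanning pollsters_definitions_dirs and finding nothing, so only the built-in compute pollsters run. A YAML definition dropped into /etc/ceilometer/pollsters.d would be picked up on the next start; a hedged sketch that writes the sample definition from the upstream dynamic-pollster docs (the meter name and fields are illustrative, and writing under /etc requires root):

    import pathlib
    import textwrap

    definition = textwrap.dedent("""\
        ---
        - name: "dynamic.network.services.vpn.connection"
          sample_type: "gauge"
          unit: "ipsec_site_connection"
          value_attribute: "status"
          endpoint_type: "network"
          url_path: "v2.0/vpn/ipsec-site-connections"
    """)

    # The target directory matches pollsters_definitions_dirs in the dump above.
    pathlib.Path("/etc/ceilometer/pollsters.d/vpn.yaml").write_text(definition)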
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.019 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.019 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.019 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.019 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.020 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.021 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.022 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.023 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.024 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.025 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.026 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.027 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.028 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.029 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
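
The [service_credentials] block above carries everything keystoneauth1 needs to build a password auth plugin for the agent. A minimal sketch of the equivalent session setup (the password is a placeholder, since it is masked as **** in the log; the other values mirror the logged options):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Mirrors service_credentials.* as logged; password is a placeholder.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000",
        username="ceilometer",
        password="REDACTED",
        project_name="service",
        user_domain_name="Default",
        project_domain_name="Default",
    )
    # insecure=False in the log means TLS verification stays on.
    sess = session.Session(auth=auth, verify=True)
    print(sess.get_token())
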
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.030 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
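
Note that gnocchi.auth_section and zaqar.auth_section both point back at service_credentials, so neither group defines credentials of its own; keystoneauth1 follows the indirection when loading the auth plugin. A sketch of resolving it with the oslo.config loading helpers (config-file path assumed):

    from keystoneauth1 import loading
    from oslo_config import cfg

    CONF = cfg.CONF
    # Register the standard auth options under [gnocchi]; because
    # gnocchi.auth_section = service_credentials, the credentials are
    # actually read from the [service_credentials] group.
    loading.register_auth_conf_options(CONF, "gnocchi")
    CONF(["--config-file", "/etc/ceilometer/ceilometer.conf"])
    auth = loading.load_auth_from_conf_options(CONF, "gnocchi")
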
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.031 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.032 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.032 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
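
The entire option dump above, including the **** masking of the password and the closing row of asterisks, is oslo.config's standard log_opt_values() output, which cotyledon's oslo_config_glue emits at service startup:

    import logging
    from oslo_config import cfg

    LOG = logging.getLogger(__name__)
    # Logs every registered option at DEBUG, masking options marked
    # secret (such as passwords) as ****, framed by asterisk rows.
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)
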
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.032 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.034 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
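
The dict logged by load_config is the parsed polling configuration: one source named pollsters, polled every 120 seconds, matching five meter patterns. The corresponding polling.yaml would look like this (a sketch reconstructed from the logged dict):

    import yaml

    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120        # seconds between polling cycles
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """
    config = yaml.safe_load(POLLING_YAML)
    assert config["sources"][0]["interval"] == 120
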
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.045 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.045 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
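
The two lines above mean the polling task owns more pollsters than worker threads: the manager fans work out on a ThreadPoolExecutor, and with [1] thread every pollster in this source runs sequentially. Schematically (names here are illustrative, not ceilometer's internals):

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        # stand-in for one pollster's discovery + sampling
        print(f"polled {name}")

    pollsters = ["cpu", "memory.usage", "disk.device.read.bytes"]
    # max_workers=1 matches "[1] threads": submissions queue up and run
    # one after another, so one slow pollster delays the whole cycle.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for fut in [executor.submit(run_pollster, p) for p in pollsters]:
            fut.result()
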
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.045 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.046 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
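
Each "Registering pollster" line above wraps a stevedore Extension: the compute agent loads its pollsters from Python entry points rather than hard-coding them. The set registered here can be enumerated directly (ceilometer.poll.compute is the namespace the compute agent uses):

    from stevedore import extension

    # Lists the compute pollster plugins installed on this host.
    mgr = extension.ExtensionManager(namespace="ceilometer.poll.compute",
                                     invoke_on_load=False)
    for ext in mgr:
        print(ext.name, ext.entry_point)
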
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
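
Every pollster above was skipped for the same reason: the local_instances discovery (which connects to qemu:///system, per the earlier libvirt line) found no running domains on this node, so there is nothing to sample. The discovery result is easy to reproduce with a read-only libvirt connection:

    import libvirt  # python3-libvirt bindings

    # An empty domain list is exactly why each pollster logs
    # "no resources found this cycle".
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        print(f"{len(conn.listAllDomains())} domain(s) found")
    finally:
        conn.close()
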
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:18:01.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:18:01 np0005539860 python3.9[200512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429480.2713304-578-197691572830929/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
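
Here the log switches from ceilometer startup to the telemetry node_exporter rollout: ansible copies the healthcheck script into place with mode 0700. The checksum ansible reports is a plain SHA-1 of the file contents, so the deployed file can be verified against the logged value (path assembled from dest plus _original_basename):

    import hashlib

    path = "/var/lib/openstack/healthchecks/node_exporter/healthcheck"
    with open(path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    # Should match the checksum logged by ansible above.
    print(digest == "e380c11c36804bfc65a818f2960cfa663daacfe5")
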
Nov 29 10:18:02 np0005539860 python3.9[200664]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Nov 29 10:18:03 np0005539860 python3.9[200816]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
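
container_config_data and container_config_hash are modules from the edpm-ansible collection: the first gathers the generated config (here the node_exporter.json pattern under /var/lib/openstack/config/telemetry), the second fingerprints config content under the config_vol_prefix so a container is recreated only when its configuration actually changed; the resulting digest surfaces as the EDPM_CONFIG_HASH environment variable visible in the ovn_metadata_agent config_data below. The idea is plain content hashing, roughly along these lines (a sketch of the concept, not the module's actual code):

    import hashlib
    import pathlib

    def config_hash(root="/var/lib/config-data/ansible-generated"):
        # Hash all config files under the prefix in a stable order.
        h = hashlib.sha256()
        for path in sorted(pathlib.Path(root).rglob("*")):
            if path.is_file():
                h.update(path.read_bytes())
        return h.hexdigest()
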
Nov 29 10:18:04 np0005539860 podman[200940]: 2025-11-29 15:18:04.164366587 +0000 UTC m=+0.124938845 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
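
The periodic health_status events (here ovn_controller reporting healthy with a failing streak of 0) come from podman's healthcheck timer running each container's configured test command. The same state podman logs in these events can be read back on demand:

    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ovn_controller"],
        capture_output=True, text=True, check=True,
    )
    health = json.loads(out.stdout)
    # Matches health_status / health_failing_streak in the event above.
    print(health["Status"], health["FailingStreak"])
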
Nov 29 10:18:04 np0005539860 python3[200989]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:18:04 np0005539860 podman[201031]: 2025-11-29 15:18:04.603190911 +0000 UTC m=+0.067053961 container create e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 10:18:04 np0005539860 podman[201031]: 2025-11-29 15:18:04.564596315 +0000 UTC m=+0.028459395 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Nov 29 10:18:04 np0005539860 podman[201032]: 2025-11-29 15:18:04.608153484 +0000 UTC m=+0.058402470 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:18:04 np0005539860 python3[200989]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Nov 29 10:18:05 np0005539860 python3.9[201240]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:18:06 np0005539860 python3.9[201394]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:07 np0005539860 python3.9[201545]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764429486.5916348-631-268136160644497/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:07 np0005539860 python3.9[201621]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:18:07 np0005539860 systemd[1]: Reloading.
Nov 29 10:18:07 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:18:07 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:18:07 np0005539860 podman[201623]: 2025-11-29 15:18:07.938387436 +0000 UTC m=+0.111262719 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 10:18:08 np0005539860 python3.9[201751]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:18:08 np0005539860 systemd[1]: Reloading.
Nov 29 10:18:08 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:18:08 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:18:09 np0005539860 systemd[1]: Starting node_exporter container...
Nov 29 10:18:09 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:18:09 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2407bb7f032ee8318549daab363029b8f549c8859ba2fbd5a112d21a197e3d43/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:09 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2407bb7f032ee8318549daab363029b8f549c8859ba2fbd5a112d21a197e3d43/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:09 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.
Nov 29 10:18:09 np0005539860 podman[201791]: 2025-11-29 15:18:09.264031332 +0000 UTC m=+0.124431360 container init e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.276Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.277Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.277Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.277Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.277Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.277Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=arp
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=bcache
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=bonding
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=cpu
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=edac
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=filefd
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=netclass
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=netdev
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=netstat
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=nfs
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=nvme
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=softnet
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=systemd
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=xfs
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=node_exporter.go:117 level=info collector=zfs
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.278Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 29 10:18:09 np0005539860 node_exporter[201806]: ts=2025-11-29T15:18:09.279Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 29 10:18:09 np0005539860 podman[201791]: 2025-11-29 15:18:09.29022759 +0000 UTC m=+0.150627598 container start e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 10:18:09 np0005539860 podman[201791]: node_exporter
Nov 29 10:18:09 np0005539860 systemd[1]: Started node_exporter container.
Nov 29 10:18:09 np0005539860 podman[201815]: 2025-11-29 15:18:09.359847483 +0000 UTC m=+0.060805859 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 10:18:10 np0005539860 python3.9[201990]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:18:10 np0005539860 systemd[1]: Stopping node_exporter container...
Nov 29 10:18:10 np0005539860 systemd[1]: libpod-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope: Deactivated successfully.
Nov 29 10:18:10 np0005539860 podman[201994]: 2025-11-29 15:18:10.277261181 +0000 UTC m=+0.072922530 container died e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:18:10 np0005539860 systemd[1]: e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22-29e13f6f5d3e4508.timer: Deactivated successfully.
Nov 29 10:18:10 np0005539860 systemd[1]: Stopped /usr/bin/podman healthcheck run e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.
Nov 29 10:18:10 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22-userdata-shm.mount: Deactivated successfully.
Nov 29 10:18:10 np0005539860 systemd[1]: var-lib-containers-storage-overlay-2407bb7f032ee8318549daab363029b8f549c8859ba2fbd5a112d21a197e3d43-merged.mount: Deactivated successfully.
Nov 29 10:18:10 np0005539860 podman[201994]: 2025-11-29 15:18:10.319169704 +0000 UTC m=+0.114831053 container cleanup e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 10:18:10 np0005539860 podman[201994]: node_exporter
Nov 29 10:18:10 np0005539860 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 29 10:18:10 np0005539860 podman[202024]: node_exporter
Nov 29 10:18:10 np0005539860 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Nov 29 10:18:10 np0005539860 systemd[1]: Stopped node_exporter container.
Nov 29 10:18:10 np0005539860 systemd[1]: Starting node_exporter container...
Nov 29 10:18:10 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:18:10 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2407bb7f032ee8318549daab363029b8f549c8859ba2fbd5a112d21a197e3d43/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:10 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2407bb7f032ee8318549daab363029b8f549c8859ba2fbd5a112d21a197e3d43/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:10 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.
Nov 29 10:18:10 np0005539860 podman[202037]: 2025-11-29 15:18:10.519174538 +0000 UTC m=+0.119463576 container init e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.530Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=arp
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=bcache
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=bonding
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=btrfs
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=conntrack
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=cpu
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=cpufreq
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=diskstats
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=edac
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=fibrechannel
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=filefd
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=filesystem
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=infiniband
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=ipvs
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=loadavg
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=mdadm
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=meminfo
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=netclass
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=netdev
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=netstat
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=nfs
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=nfsd
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=nvme
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=schedstat
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=sockstat
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=softnet
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=systemd
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=tapestats
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=udp_queues
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=vmstat
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=xfs
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.531Z caller=node_exporter.go:117 level=info collector=zfs
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.532Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Nov 29 10:18:10 np0005539860 node_exporter[202053]: ts=2025-11-29T15:18:10.532Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Nov 29 10:18:10 np0005539860 podman[202037]: 2025-11-29 15:18:10.543828182 +0000 UTC m=+0.144117150 container start e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:18:10 np0005539860 podman[202037]: node_exporter
Nov 29 10:18:10 np0005539860 systemd[1]: Started node_exporter container.
Nov 29 10:18:10 np0005539860 podman[202062]: 2025-11-29 15:18:10.635126982 +0000 UTC m=+0.083675420 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 10:18:11 np0005539860 python3.9[202237]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:18:11 np0005539860 python3.9[202360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429490.8465712-663-10541521906905/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:18:12 np0005539860 python3.9[202512]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Nov 29 10:18:13 np0005539860 python3.9[202664]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:18:14 np0005539860 python3[202816]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:18:16 np0005539860 podman[202830]: 2025-11-29 15:18:16.347407607 +0000 UTC m=+1.371097461 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 29 10:18:16 np0005539860 podman[202928]: 2025-11-29 15:18:16.465869724 +0000 UTC m=+0.040194924 container create 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 10:18:16 np0005539860 podman[202928]: 2025-11-29 15:18:16.445063392 +0000 UTC m=+0.019388602 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Nov 29 10:18:16 np0005539860 python3[202816]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Nov 29 10:18:17 np0005539860 python3.9[203118]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:18:18 np0005539860 python3.9[203272]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:18 np0005539860 python3.9[203423]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764429498.1323988-716-3721664950229/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:19 np0005539860 python3.9[203499]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:18:19 np0005539860 systemd[1]: Reloading.
Nov 29 10:18:19 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:18:19 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:18:20 np0005539860 python3.9[203611]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:18:20 np0005539860 systemd[1]: Reloading.
Nov 29 10:18:20 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:18:20 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:18:21 np0005539860 systemd[1]: Starting podman_exporter container...
Nov 29 10:18:21 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:18:21 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86e0ce864f0f2af5888617227711154b92af3bf9edd36e3bf3da3b775c9e4c2/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:21 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86e0ce864f0f2af5888617227711154b92af3bf9edd36e3bf3da3b775c9e4c2/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:21 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.
Nov 29 10:18:21 np0005539860 podman[203650]: 2025-11-29 15:18:21.231424203 +0000 UTC m=+0.159802913 container init 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 10:18:21 np0005539860 podman_exporter[203665]: ts=2025-11-29T15:18:21.257Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 29 10:18:21 np0005539860 podman_exporter[203665]: ts=2025-11-29T15:18:21.258Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 29 10:18:21 np0005539860 podman_exporter[203665]: ts=2025-11-29T15:18:21.258Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 29 10:18:21 np0005539860 podman_exporter[203665]: ts=2025-11-29T15:18:21.258Z caller=handler.go:105 level=info collector=container
Nov 29 10:18:21 np0005539860 podman[203650]: 2025-11-29 15:18:21.262338007 +0000 UTC m=+0.190716707 container start 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 10:18:21 np0005539860 podman[203650]: podman_exporter
Nov 29 10:18:21 np0005539860 systemd[1]: Starting Podman API Service...
Nov 29 10:18:21 np0005539860 systemd[1]: Started Podman API Service.
Nov 29 10:18:21 np0005539860 systemd[1]: Started podman_exporter container.
Nov 29 10:18:21 np0005539860 podman[203677]: time="2025-11-29T15:18:21Z" level=info msg="/usr/bin/podman filtering at log level info"
Nov 29 10:18:21 np0005539860 podman[203677]: time="2025-11-29T15:18:21Z" level=info msg="Setting parallel job count to 25"
Nov 29 10:18:21 np0005539860 podman[203677]: time="2025-11-29T15:18:21Z" level=info msg="Using sqlite as database backend"
Nov 29 10:18:21 np0005539860 podman[203677]: time="2025-11-29T15:18:21Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Nov 29 10:18:21 np0005539860 podman[203677]: time="2025-11-29T15:18:21Z" level=info msg="Using systemd socket activation to determine API endpoint"
Nov 29 10:18:21 np0005539860 podman[203677]: time="2025-11-29T15:18:21Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Nov 29 10:18:21 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:18:21 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 29 10:18:21 np0005539860 podman[203677]: time="2025-11-29T15:18:21Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 10:18:21 np0005539860 podman[203674]: 2025-11-29 15:18:21.359067405 +0000 UTC m=+0.083450354 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:18:21 np0005539860 systemd[1]: 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7-41ce650e8983a208.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:18:21 np0005539860 systemd[1]: 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7-41ce650e8983a208.service: Failed with result 'exit-code'.
Nov 29 10:18:21 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:18:21 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19587 "" "Go-http-client/1.1"
Nov 29 10:18:21 np0005539860 podman_exporter[203665]: ts=2025-11-29T15:18:21.375Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 29 10:18:21 np0005539860 podman_exporter[203665]: ts=2025-11-29T15:18:21.376Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 29 10:18:21 np0005539860 podman_exporter[203665]: ts=2025-11-29T15:18:21.377Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 29 10:18:22 np0005539860 python3.9[203863]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:18:22 np0005539860 systemd[1]: Stopping podman_exporter container...
Nov 29 10:18:22 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:18:21 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Nov 29 10:18:22 np0005539860 systemd[1]: libpod-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope: Deactivated successfully.
Nov 29 10:18:22 np0005539860 podman[203867]: 2025-11-29 15:18:22.368132514 +0000 UTC m=+0.057100703 container died 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:18:22 np0005539860 systemd[1]: 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7-41ce650e8983a208.timer: Deactivated successfully.
Nov 29 10:18:22 np0005539860 systemd[1]: Stopped /usr/bin/podman healthcheck run 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.
Nov 29 10:18:22 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7-userdata-shm.mount: Deactivated successfully.
Nov 29 10:18:22 np0005539860 systemd[1]: var-lib-containers-storage-overlay-a86e0ce864f0f2af5888617227711154b92af3bf9edd36e3bf3da3b775c9e4c2-merged.mount: Deactivated successfully.
Nov 29 10:18:22 np0005539860 podman[203867]: 2025-11-29 15:18:22.757165638 +0000 UTC m=+0.446133827 container cleanup 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:18:22 np0005539860 podman[203867]: podman_exporter
Nov 29 10:18:22 np0005539860 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 29 10:18:22 np0005539860 podman[203895]: podman_exporter
Nov 29 10:18:22 np0005539860 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Nov 29 10:18:22 np0005539860 systemd[1]: Stopped podman_exporter container.
Nov 29 10:18:22 np0005539860 systemd[1]: Starting podman_exporter container...
Nov 29 10:18:22 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:18:22 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86e0ce864f0f2af5888617227711154b92af3bf9edd36e3bf3da3b775c9e4c2/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:22 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a86e0ce864f0f2af5888617227711154b92af3bf9edd36e3bf3da3b775c9e4c2/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:23 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.
Nov 29 10:18:23 np0005539860 podman[203908]: 2025-11-29 15:18:23.014039188 +0000 UTC m=+0.137541700 container init 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 10:18:23 np0005539860 podman_exporter[203924]: ts=2025-11-29T15:18:23.036Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Nov 29 10:18:23 np0005539860 podman_exporter[203924]: ts=2025-11-29T15:18:23.037Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Nov 29 10:18:23 np0005539860 podman_exporter[203924]: ts=2025-11-29T15:18:23.038Z caller=handler.go:94 level=info msg="enabled collectors"
Nov 29 10:18:23 np0005539860 podman_exporter[203924]: ts=2025-11-29T15:18:23.038Z caller=handler.go:105 level=info collector=container
Nov 29 10:18:23 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:18:23 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Nov 29 10:18:23 np0005539860 podman[203677]: time="2025-11-29T15:18:23Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 10:18:23 np0005539860 podman[203908]: 2025-11-29 15:18:23.055319342 +0000 UTC m=+0.178821774 container start 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 10:18:23 np0005539860 podman[203908]: podman_exporter
Nov 29 10:18:23 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:18:23 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19589 "" "Go-http-client/1.1"
Nov 29 10:18:23 np0005539860 podman_exporter[203924]: ts=2025-11-29T15:18:23.066Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Nov 29 10:18:23 np0005539860 podman_exporter[203924]: ts=2025-11-29T15:18:23.066Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Nov 29 10:18:23 np0005539860 systemd[1]: Started podman_exporter container.
Nov 29 10:18:23 np0005539860 podman_exporter[203924]: ts=2025-11-29T15:18:23.067Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Nov 29 10:18:23 np0005539860 podman[203933]: 2025-11-29 15:18:23.15656288 +0000 UTC m=+0.084639418 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:18:23 np0005539860 python3.9[204109]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:18:24 np0005539860 python3.9[204232]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429503.3485076-748-224883199650360/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:18:25 np0005539860 python3.9[204384]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Nov 29 10:18:26 np0005539860 python3.9[204536]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:18:27 np0005539860 python3[204688]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:18:29 np0005539860 podman[204699]: 2025-11-29 15:18:29.837400833 +0000 UTC m=+2.378821522 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 29 10:18:29 np0005539860 podman[204797]: 2025-11-29 15:18:29.978212556 +0000 UTC m=+0.050274846 container create e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 10:18:29 np0005539860 podman[204797]: 2025-11-29 15:18:29.950714981 +0000 UTC m=+0.022777311 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 29 10:18:29 np0005539860 python3[204688]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Nov 29 10:18:30 np0005539860 podman[204936]: 2025-11-29 15:18:30.622622776 +0000 UTC m=+0.068556614 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, container_name=ceilometer_agent_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 10:18:30 np0005539860 systemd[1]: 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1-72e4370917cae21e.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:18:30 np0005539860 systemd[1]: 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1-72e4370917cae21e.service: Failed with result 'exit-code'.
Nov 29 10:18:30 np0005539860 python3.9[205008]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:18:31 np0005539860 python3.9[205162]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:32 np0005539860 python3.9[205313]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764429511.9182343-801-93657396889720/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:33 np0005539860 python3.9[205389]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:18:33 np0005539860 systemd[1]: Reloading.
Nov 29 10:18:33 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:18:33 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:18:34 np0005539860 python3.9[205499]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:18:34 np0005539860 systemd[1]: Reloading.
Nov 29 10:18:34 np0005539860 podman[205501]: 2025-11-29 15:18:34.360820807 +0000 UTC m=+0.103044371 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 10:18:34 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:18:34 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:18:34 np0005539860 systemd[1]: Starting openstack_network_exporter container...
Nov 29 10:18:34 np0005539860 podman[205565]: 2025-11-29 15:18:34.726716992 +0000 UTC m=+0.081779977 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent)
Nov 29 10:18:34 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:18:34 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97693c66e7b41bc625c357323c34bf53c1276167b1b28da51f47acc7daad9822/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:34 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97693c66e7b41bc625c357323c34bf53c1276167b1b28da51f47acc7daad9822/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:34 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97693c66e7b41bc625c357323c34bf53c1276167b1b28da51f47acc7daad9822/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:34 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.
Nov 29 10:18:34 np0005539860 podman[205564]: 2025-11-29 15:18:34.771672292 +0000 UTC m=+0.125835281 container init e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *bridge.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *coverage.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *datapath.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *iface.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *memory.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *ovnnorthd.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *ovn.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *ovsdbserver.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *pmd_perf.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *pmd_rxq.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: INFO    15:18:34 main.go:48: registering *vswitch.Collector
Nov 29 10:18:34 np0005539860 openstack_network_exporter[205595]: NOTICE  15:18:34 main.go:76: listening on https://:9105/metrics
Nov 29 10:18:34 np0005539860 podman[205564]: 2025-11-29 15:18:34.796800889 +0000 UTC m=+0.150963858 container start e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.6, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 10:18:34 np0005539860 podman[205564]: openstack_network_exporter
Nov 29 10:18:34 np0005539860 systemd[1]: Started openstack_network_exporter container.
Nov 29 10:18:34 np0005539860 podman[205608]: 2025-11-29 15:18:34.882803147 +0000 UTC m=+0.073971051 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, maintainer=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Nov 29 10:18:35 np0005539860 python3.9[205782]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:18:35 np0005539860 systemd[1]: Stopping openstack_network_exporter container...
Nov 29 10:18:35 np0005539860 systemd[1]: libpod-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope: Deactivated successfully.
Nov 29 10:18:35 np0005539860 podman[205786]: 2025-11-29 15:18:35.941144641 +0000 UTC m=+0.065449975 container died e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Nov 29 10:18:35 np0005539860 systemd[1]: e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa-f75ca4fa97267ff.timer: Deactivated successfully.
Nov 29 10:18:35 np0005539860 systemd[1]: Stopped /usr/bin/podman healthcheck run e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.
Nov 29 10:18:35 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa-userdata-shm.mount: Deactivated successfully.
Nov 29 10:18:35 np0005539860 systemd[1]: var-lib-containers-storage-overlay-97693c66e7b41bc625c357323c34bf53c1276167b1b28da51f47acc7daad9822-merged.mount: Deactivated successfully.
Nov 29 10:18:36 np0005539860 podman[205786]: 2025-11-29 15:18:36.868115554 +0000 UTC m=+0.992420848 container cleanup e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.buildah.version=1.33.7, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 29 10:18:36 np0005539860 podman[205786]: openstack_network_exporter
Nov 29 10:18:36 np0005539860 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Nov 29 10:18:36 np0005539860 podman[205813]: openstack_network_exporter
Nov 29 10:18:36 np0005539860 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Nov 29 10:18:36 np0005539860 systemd[1]: Stopped openstack_network_exporter container.
Nov 29 10:18:36 np0005539860 systemd[1]: Starting openstack_network_exporter container...
Nov 29 10:18:37 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:18:37 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97693c66e7b41bc625c357323c34bf53c1276167b1b28da51f47acc7daad9822/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:37 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97693c66e7b41bc625c357323c34bf53c1276167b1b28da51f47acc7daad9822/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:37 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/97693c66e7b41bc625c357323c34bf53c1276167b1b28da51f47acc7daad9822/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:18:37 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.
Nov 29 10:18:37 np0005539860 podman[205826]: 2025-11-29 15:18:37.134958614 +0000 UTC m=+0.147713665 container init e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9)
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *bridge.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *coverage.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *datapath.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *iface.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *memory.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *ovnnorthd.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *ovn.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *ovsdbserver.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *pmd_perf.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *pmd_rxq.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: INFO    15:18:37 main.go:48: registering *vswitch.Collector
Nov 29 10:18:37 np0005539860 openstack_network_exporter[205841]: NOTICE  15:18:37 main.go:76: listening on https://:9105/metrics
Nov 29 10:18:37 np0005539860 podman[205826]: 2025-11-29 15:18:37.170481651 +0000 UTC m=+0.183236642 container start e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, architecture=x86_64)
Nov 29 10:18:37 np0005539860 podman[205826]: openstack_network_exporter
Nov 29 10:18:37 np0005539860 systemd[1]: Started openstack_network_exporter container.
Nov 29 10:18:37 np0005539860 podman[205851]: 2025-11-29 15:18:37.290638377 +0000 UTC m=+0.101781456 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, com.redhat.component=ubi9-minimal-container, vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 29 10:18:38 np0005539860 python3.9[206023]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 10:18:38 np0005539860 podman[206072]: 2025-11-29 15:18:38.654989002 +0000 UTC m=+0.092065153 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_managed=true, container_name=multipathd)
Nov 29 10:18:39 np0005539860 python3.9[206195]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 29 10:18:40 np0005539860 python3.9[206360]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:40 np0005539860 systemd[1]: Started libpod-conmon-c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.scope.
Nov 29 10:18:40 np0005539860 podman[206361]: 2025-11-29 15:18:40.547492184 +0000 UTC m=+0.142029069 container exec c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 10:18:40 np0005539860 podman[206361]: 2025-11-29 15:18:40.580333653 +0000 UTC m=+0.174870508 container exec_died c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 10:18:40 np0005539860 systemd[1]: libpod-conmon-c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.scope: Deactivated successfully.
Nov 29 10:18:41 np0005539860 podman[206516]: 2025-11-29 15:18:41.260477628 +0000 UTC m=+0.067754151 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:18:41 np0005539860 python3.9[206568]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:41 np0005539860 systemd[1]: Started libpod-conmon-c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.scope.
Nov 29 10:18:41 np0005539860 podman[206569]: 2025-11-29 15:18:41.563189274 +0000 UTC m=+0.094745641 container exec c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 10:18:41 np0005539860 podman[206588]: 2025-11-29 15:18:41.626854266 +0000 UTC m=+0.050818361 container exec_died c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller)
Nov 29 10:18:41 np0005539860 podman[206569]: 2025-11-29 15:18:41.632324064 +0000 UTC m=+0.163880381 container exec_died c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 10:18:41 np0005539860 systemd[1]: libpod-conmon-c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.scope: Deactivated successfully.
Nov 29 10:18:42 np0005539860 python3.9[206752]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:43 np0005539860 python3.9[206904]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 29 10:18:44 np0005539860 python3.9[207070]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:44 np0005539860 systemd[1]: Started libpod-conmon-39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.scope.
Nov 29 10:18:44 np0005539860 podman[207071]: 2025-11-29 15:18:44.537872681 +0000 UTC m=+0.083551598 container exec 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 10:18:44 np0005539860 podman[207071]: 2025-11-29 15:18:44.568036084 +0000 UTC m=+0.113714991 container exec_died 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:18:44 np0005539860 systemd[1]: libpod-conmon-39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.scope: Deactivated successfully.
Nov 29 10:18:45 np0005539860 python3.9[207255]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:45 np0005539860 systemd[1]: Started libpod-conmon-39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.scope.
Nov 29 10:18:45 np0005539860 podman[207256]: 2025-11-29 15:18:45.432243272 +0000 UTC m=+0.082324932 container exec 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 10:18:45 np0005539860 podman[207256]: 2025-11-29 15:18:45.467915734 +0000 UTC m=+0.117997394 container exec_died 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Nov 29 10:18:45 np0005539860 systemd[1]: libpod-conmon-39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.scope: Deactivated successfully.
Nov 29 10:18:46 np0005539860 python3.9[207439]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:47 np0005539860 python3.9[207591]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 29 10:18:48 np0005539860 nova_compute[189485]: 2025-11-29 15:18:48.905 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:48 np0005539860 nova_compute[189485]: 2025-11-29 15:18:48.938 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:48 np0005539860 nova_compute[189485]: 2025-11-29 15:18:48.938 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 10:18:48 np0005539860 nova_compute[189485]: 2025-11-29 15:18:48.938 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 10:18:48 np0005539860 nova_compute[189485]: 2025-11-29 15:18:48.953 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 10:18:48 np0005539860 nova_compute[189485]: 2025-11-29 15:18:48.953 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:48 np0005539860 nova_compute[189485]: 2025-11-29 15:18:48.954 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:49 np0005539860 python3.9[207757]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:49 np0005539860 nova_compute[189485]: 2025-11-29 15:18:49.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:49 np0005539860 nova_compute[189485]: 2025-11-29 15:18:49.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:49 np0005539860 nova_compute[189485]: 2025-11-29 15:18:49.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:49 np0005539860 systemd[1]: Started libpod-conmon-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope.
Nov 29 10:18:49 np0005539860 podman[207758]: 2025-11-29 15:18:49.54918924 +0000 UTC m=+0.083558389 container exec 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 10:18:49 np0005539860 podman[207758]: 2025-11-29 15:18:49.580191876 +0000 UTC m=+0.114561015 container exec_died 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 10:18:49 np0005539860 systemd[1]: libpod-conmon-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope: Deactivated successfully.
Nov 29 10:18:50 np0005539860 python3.9[207941]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:18:50 np0005539860 systemd[1]: Started libpod-conmon-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope.
Nov 29 10:18:50 np0005539860 podman[207942]: 2025-11-29 15:18:50.519222149 +0000 UTC m=+0.090417167 container exec 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 10:18:50 np0005539860 podman[207942]: 2025-11-29 15:18:50.549790934 +0000 UTC m=+0.120985872 container exec_died 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.560 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.561 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.562 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.562 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 10:18:50 np0005539860 systemd[1]: libpod-conmon-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope: Deactivated successfully.
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.752 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.753 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5883MB free_disk=72.43982696533203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.753 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.753 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.831 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.832 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.870 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.886 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.887 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 10:18:50 np0005539860 nova_compute[189485]: 2025-11-29 15:18:50.887 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.134s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:18:51 np0005539860 python3.9[208124]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:52 np0005539860 python3.9[208276]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 29 10:18:53 np0005539860 podman[208442]: 2025-11-29 15:18:53.308458741 +0000 UTC m=+0.093286269 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 10:18:53 np0005539860 python3.9[208441]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:53 np0005539860 systemd[1]: Started libpod-conmon-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope.
Nov 29 10:18:53 np0005539860 podman[208466]: 2025-11-29 15:18:53.525009766 +0000 UTC m=+0.116241784 container exec 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 10:18:53 np0005539860 podman[208466]: 2025-11-29 15:18:53.53623087 +0000 UTC m=+0.127462788 container exec_died 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 10:18:53 np0005539860 systemd[1]: libpod-conmon-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope: Deactivated successfully.
Nov 29 10:18:54 np0005539860 python3.9[208647]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:54 np0005539860 systemd[1]: Started libpod-conmon-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope.
Nov 29 10:18:54 np0005539860 podman[208648]: 2025-11-29 15:18:54.557924553 +0000 UTC m=+0.101444625 container exec 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Nov 29 10:18:54 np0005539860 podman[208648]: 2025-11-29 15:18:54.59133371 +0000 UTC m=+0.134853912 container exec_died 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 10:18:54 np0005539860 systemd[1]: libpod-conmon-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope: Deactivated successfully.
Nov 29 10:18:55 np0005539860 python3.9[208830]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:18:56 np0005539860 python3.9[208982]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 29 10:18:57 np0005539860 python3.9[209148]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:57 np0005539860 systemd[1]: Started libpod-conmon-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope.
Nov 29 10:18:57 np0005539860 podman[209149]: 2025-11-29 15:18:57.345986092 +0000 UTC m=+0.110349733 container exec e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 10:18:57 np0005539860 podman[209149]: 2025-11-29 15:18:57.382100226 +0000 UTC m=+0.146463837 container exec_died e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:18:57 np0005539860 systemd[1]: libpod-conmon-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope: Deactivated successfully.
Nov 29 10:18:58 np0005539860 python3.9[209332]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:18:58 np0005539860 systemd[1]: Started libpod-conmon-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope.
Nov 29 10:18:58 np0005539860 podman[209333]: 2025-11-29 15:18:58.216972067 +0000 UTC m=+0.068672888 container exec e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 10:18:58 np0005539860 podman[209333]: 2025-11-29 15:18:58.251001111 +0000 UTC m=+0.102701892 container exec_died e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:18:58 np0005539860 systemd[1]: libpod-conmon-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope: Deactivated successfully.
Nov 29 10:18:59 np0005539860 python3.9[209516]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
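The exec/file pair above is the edpm_ansible healthcheck setup pattern: query the container's effective UID and GID with id -u / id -g through podman exec, then ensure the mounted healthcheck directory exists with matching ownership (here 0:0, mode 0700, since node_exporter runs as root). A minimal sketch of that flow, assuming the podman CLI on PATH and the names from the log:

# Sketch of the healthcheck-directory ownership pattern logged above.
# Container name and path are taken from the log; run as root.
import subprocess, os

def container_ids(name: str) -> tuple[int, int]:
    # Equivalent of the logged podman_container_exec tasks: id -u / id -g
    uid = subprocess.run(["podman", "exec", name, "id", "-u"],
                         capture_output=True, text=True, check=True).stdout.strip()
    gid = subprocess.run(["podman", "exec", name, "id", "-g"],
                         capture_output=True, text=True, check=True).stdout.strip()
    return int(uid), int(gid)

uid, gid = container_ids("node_exporter")
path = "/var/lib/openstack/healthchecks/node_exporter"
os.makedirs(path, mode=0o700, exist_ok=True)  # mirrors the ansible.builtin.file task
os.chown(path, uid, gid)  # log shows owner=0 group=0: the container runs as root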
Nov 29 10:18:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:18:59.141 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:18:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:18:59.143 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:18:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:18:59.143 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
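The three DEBUG lines are oslo.concurrency's lockutils instrumentation around ProcessMonitor._check_child_processes: acquire, time waited, time held. A sketch of the same primitive, assuming oslo.concurrency is installed:

# Minimal sketch of the lock pattern behind the ovn_metadata_agent
# DEBUG lines (acquired -> waited -> held -> released).
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # Runs with the named in-process lock held; lockutils emits the
    # "acquired by" / "released by" DEBUG messages seen above.
    pass

# The context-manager form is equivalent:
with lockutils.lock("_check_child_processes"):
    pass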
Nov 29 10:18:59 np0005539860 python3.9[209668]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 29 10:19:00 np0005539860 python3.9[209834]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:19:00 np0005539860 systemd[1]: Started libpod-conmon-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope.
Nov 29 10:19:00 np0005539860 podman[209835]: 2025-11-29 15:19:00.886773274 +0000 UTC m=+0.085873725 container exec 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 10:19:00 np0005539860 podman[209835]: 2025-11-29 15:19:00.918373288 +0000 UTC m=+0.117473719 container exec_died 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 10:19:00 np0005539860 systemd[1]: libpod-conmon-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope: Deactivated successfully.
Nov 29 10:19:00 np0005539860 podman[209853]: 2025-11-29 15:19:00.988007292 +0000 UTC m=+0.094767032 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
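health_status events such as this one are produced when podman runs the container's configured healthcheck command (the healthcheck.test entry in config_data) on its timer and records the verdict. The same check can be triggered by hand; a sketch, assuming podman healthcheck run semantics (exit status 0 means healthy):

# Sketch: run the same healthcheck podman's timer fires for the
# ceilometer_agent_compute container seen above.
import subprocess

res = subprocess.run(["podman", "healthcheck", "run", "ceilometer_agent_compute"])
print("healthy" if res.returncode == 0 else "unhealthy")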
Nov 29 10:19:01 np0005539860 python3.9[210039]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:19:01 np0005539860 systemd[1]: Started libpod-conmon-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope.
Nov 29 10:19:01 np0005539860 podman[210040]: 2025-11-29 15:19:01.89644739 +0000 UTC m=+0.112094103 container exec 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:19:01 np0005539860 podman[210040]: 2025-11-29 15:19:01.932981397 +0000 UTC m=+0.148628090 container exec_died 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 10:19:01 np0005539860 systemd[1]: libpod-conmon-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope: Deactivated successfully.
Nov 29 10:19:02 np0005539860 python3.9[210224]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:03 np0005539860 python3.9[210376]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 29 10:19:04 np0005539860 python3.9[210542]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:19:04 np0005539860 systemd[1]: Started libpod-conmon-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope.
Nov 29 10:19:04 np0005539860 podman[210543]: 2025-11-29 15:19:04.679034812 +0000 UTC m=+0.089900437 container exec e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., architecture=x86_64, version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 29 10:19:04 np0005539860 podman[210543]: 2025-11-29 15:19:04.691010863 +0000 UTC m=+0.101876468 container exec_died e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, config_id=edpm)
Nov 29 10:19:04 np0005539860 systemd[1]: libpod-conmon-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope: Deactivated successfully.
Nov 29 10:19:04 np0005539860 podman[210561]: 2025-11-29 15:19:04.793586349 +0000 UTC m=+0.109633136 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 10:19:04 np0005539860 podman[210601]: 2025-11-29 15:19:04.867542888 +0000 UTC m=+0.062082493 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 10:19:05 np0005539860 python3.9[210770]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:19:05 np0005539860 systemd[1]: Started libpod-conmon-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope.
Nov 29 10:19:05 np0005539860 podman[210771]: 2025-11-29 15:19:05.635815301 +0000 UTC m=+0.090675087 container exec e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 29 10:19:05 np0005539860 podman[210771]: 2025-11-29 15:19:05.669690069 +0000 UTC m=+0.124549775 container exec_died e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, name=ubi9-minimal, version=9.6, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, distribution-scope=public)
Nov 29 10:19:05 np0005539860 systemd[1]: libpod-conmon-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope: Deactivated successfully.
Nov 29 10:19:06 np0005539860 python3.9[210955]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:07 np0005539860 podman[211079]: 2025-11-29 15:19:07.45111142 +0000 UTC m=+0.094346047 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, name=ubi9-minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Nov 29 10:19:07 np0005539860 python3.9[211123]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:08 np0005539860 python3.9[211277]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:08 np0005539860 podman[211372]: 2025-11-29 15:19:08.85998367 +0000 UTC m=+0.063749657 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 10:19:09 np0005539860 python3.9[211417]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429547.9415863-1082-8563740346226/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
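The telemetry.yaml written here (rendered from firewall.yaml.j2) is input for the nftables steps below; only its checksum appears in the log, not its contents. As a purely hypothetical illustration of what such a rule file could carry, using the exporter ports visible in this log (9100, 9882, 9105) — the key names and schema are assumptions modeled on TripleO-style firewall maps, not taken from the actual file:

# Hypothetical illustration only: the log records telemetry.yaml's checksum,
# not its contents. Key names below are assumed, ports come from the log.
import yaml  # PyYAML

telemetry_rules = yaml.safe_load("""
'100 node_exporter':              {proto: tcp, dport: 9100}
'101 podman_exporter':            {proto: tcp, dport: 9882}
'102 openstack_network_exporter': {proto: tcp, dport: 9105}
""")
print(telemetry_rules["100 node_exporter"]["dport"])  # -> 9100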
Nov 29 10:19:09 np0005539860 python3.9[211570]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:10 np0005539860 python3.9[211722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:11 np0005539860 python3.9[211800]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:11 np0005539860 podman[211904]: 2025-11-29 15:19:11.62127695 +0000 UTC m=+0.073274502 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:19:11 np0005539860 python3.9[211977]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:12 np0005539860 python3.9[212055]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.7p5e8a3_ recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:13 np0005539860 python3.9[212207]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:13 np0005539860 python3.9[212285]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:14 np0005539860 python3.9[212437]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
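nft -j list ruleset dumps the live ruleset as JSON, giving the following tasks a machine-readable baseline. A sketch of consuming that output, assuming the nft binary is present and the caller has sufficient privileges:

# Sketch: dump the current nftables ruleset as JSON (the same command the
# ansible task above runs) and list the table names it contains.
import json, subprocess

out = subprocess.run(["nft", "-j", "list", "ruleset"],
                     capture_output=True, text=True, check=True).stdout
ruleset = json.loads(out)
tables = [item["table"]["name"]
          for item in ruleset.get("nftables", [])
          if "table" in item]
print(tables)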
Nov 29 10:19:15 np0005539860 python3[212590]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Nov 29 10:19:16 np0005539860 python3.9[212742]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:16 np0005539860 python3.9[212820]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:17 np0005539860 python3.9[212972]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:18 np0005539860 python3.9[213050]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:19 np0005539860 python3.9[213202]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:19 np0005539860 python3.9[213280]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:20 np0005539860 python3.9[213432]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:21 np0005539860 python3.9[213510]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:22 np0005539860 python3.9[213662]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:22 np0005539860 python3.9[213787]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429561.2801585-1207-131380470244807/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:23 np0005539860 podman[213939]: 2025-11-29 15:19:23.500534872 +0000 UTC m=+0.070684883 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:19:23 np0005539860 python3.9[213940]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:24 np0005539860 python3.9[214115]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:19:25 np0005539860 python3.9[214270]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
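In the blockinfile parameters, #012 is the journal's escape for a newline, so the block this task maintains in /etc/sysconfig/nftables.conf (validated with nft -c -f %s before the write) decodes to:

# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK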
Nov 29 10:19:26 np0005539860 python3.9[214422]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:19:27 np0005539860 python3.9[214575]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:19:27 np0005539860 python3.9[214729]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:19:28 np0005539860 python3.9[214884]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
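Read together, the tasks since the edpm-rules.nft copy form an idempotent apply: touch a .changed marker, syntax-check the concatenated ruleset with nft -c -f -, persist the includes, load the chains, and only while the marker exists flush and reload the rule set, deleting the marker on success. A condensed sketch of that flow under root, with the file layout from the log:

# Condensed sketch of the edpm_nftables apply sequence logged above.
# Paths from the log; error handling trimmed for brevity. Run as root.
import os, subprocess

NFT_DIR = "/etc/nftables"
CHANGED = f"{NFT_DIR}/edpm-rules.nft.changed"

def cat(*names: str) -> bytes:
    return b"".join(open(f"{NFT_DIR}/{n}", "rb").read() for n in names)

# 1. validate the full ruleset without applying it (nft -c -f -)
subprocess.run(["nft", "-c", "-f", "-"], check=True,
               input=cat("edpm-chains.nft", "edpm-flushes.nft", "edpm-rules.nft",
                         "edpm-update-jumps.nft", "edpm-jumps.nft"))
# 2. make sure the chains exist
subprocess.run(["nft", "-f", f"{NFT_DIR}/edpm-chains.nft"], check=True)
# 3. flush and reload the rules only when the marker says they changed
if os.path.exists(CHANGED):
    subprocess.run(["nft", "-f", "-"], check=True,
                   input=cat("edpm-flushes.nft", "edpm-rules.nft",
                             "edpm-update-jumps.nft"))
    os.remove(CHANGED)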
Nov 29 10:19:29 np0005539860 systemd[1]: session-25.scope: Deactivated successfully.
Nov 29 10:19:29 np0005539860 systemd[1]: session-25.scope: Consumed 1min 52.950s CPU time.
Nov 29 10:19:29 np0005539860 systemd-logind[794]: Session 25 logged out. Waiting for processes to exit.
Nov 29 10:19:29 np0005539860 systemd-logind[794]: Removed session 25.
Nov 29 10:19:29 np0005539860 podman[203677]: time="2025-11-29T15:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 10:19:29 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22542 "" "Go-http-client/1.1"
Nov 29 10:19:29 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3404 "" "Go-http-client/1.1"
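These access-log lines come from the podman API service on /run/podman/podman.sock, the endpoint prometheus-podman-exporter is pointed at through CONTAINER_HOST in its config_data above. The same call can be issued over the socket with nothing but the standard library; a sketch:

# Sketch: replay the libpod API call from the access log over the
# unix socket named in the exporter's CONTAINER_HOST setting.
import socket

sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
sock.connect("/run/podman/podman.sock")
sock.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
             b"Host: d\r\n\r\n")
resp = b""
while chunk := sock.recv(65536):
    resp += chunk
print(resp.split(b"\r\n\r\n", 1)[1][:200])  # first bytes of the JSON body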
Nov 29 10:19:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:19:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:19:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:19:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:19:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 10:19:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 10:19:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
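These errors are expected on a compute node: the exporter locates each daemon through its control socket (<daemon>.<pid>.ctl) under the run directories mounted into its container (/run/ovn and /run/openvswitch in the volumes above), and ovn-northd only runs on controller hosts, while the dpif-netdev calls need a userspace (PMD) datapath this host does not have. A sketch of the socket discovery the message implies — the glob pattern is an assumption read off the error text, not the exporter's actual code:

# Sketch of the control-socket discovery implied by the errors above:
# glob for <daemon>.*.ctl in the mounted run directories.
import glob

for daemon, rundir in [("ovn-northd", "/run/ovn"),
                       ("ovsdb-server", "/run/openvswitch")]:
    hits = glob.glob(f"{rundir}/{daemon}.*.ctl")
    print(daemon, "->", hits or "no control socket files found")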
Nov 29 10:19:31 np0005539860 podman[214917]: 2025-11-29 15:19:31.662967729 +0000 UTC m=+0.111087984 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 10:19:35 np0005539860 systemd-logind[794]: New session 26 of user zuul.
Nov 29 10:19:35 np0005539860 systemd[1]: Started Session 26 of User zuul.
Nov 29 10:19:35 np0005539860 podman[214942]: 2025-11-29 15:19:35.31759623 +0000 UTC m=+0.081395449 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:19:35 np0005539860 podman[214944]: 2025-11-29 15:19:35.367421994 +0000 UTC m=+0.128462540 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:19:36 np0005539860 python3.9[215141]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:19:36 np0005539860 systemd[1]: Reloading.
Nov 29 10:19:36 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:19:36 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:19:37 np0005539860 podman[215327]: 2025-11-29 15:19:37.623154992 +0000 UTC m=+0.072306987 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal)
Nov 29 10:19:37 np0005539860 python3.9[215326]: ansible-ansible.builtin.service_facts Invoked
Nov 29 10:19:37 np0005539860 network[215364]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Nov 29 10:19:37 np0005539860 network[215365]: 'network-scripts' will be removed from the distribution in the near future.
Nov 29 10:19:37 np0005539860 network[215366]: It is advised to switch to 'NetworkManager' instead for network management.
Nov 29 10:19:39 np0005539860 podman[215384]: 2025-11-29 15:19:39.020442812 +0000 UTC m=+0.080497575 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
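The recurring health_status=healthy entries come from podman periodically executing each container's configured healthcheck 'test' command (here /openstack/healthcheck, bind-mounted from /var/lib/openstack/healthchecks/<name>); exit status 0 maps to "healthy" and keeps health_failing_streak at 0. A sketch that triggers the same check by hand, with the container name taken from the log line above:

    import subprocess

    def poll_health(container: str) -> str:
        # `podman healthcheck run` execs the container's configured test
        # command; exit 0 means healthy, anything else unhealthy.
        result = subprocess.run(["podman", "healthcheck", "run", container])
        return "healthy" if result.returncode == 0 else "unhealthy"

    print(poll_health("multipathd"))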
Nov 29 10:19:42 np0005539860 podman[215631]: 2025-11-29 15:19:42.316054823 +0000 UTC m=+0.094169121 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 10:19:42 np0005539860 python3.9[215683]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:19:43 np0005539860 python3.9[215836]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:44 np0005539860 python3.9[215990]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:45 np0005539860 python3.9[216142]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
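The #012 sequences in the _raw_params value are journald's octal escape for embedded newlines, so the task above is a three-line shell script: if certmonger.service is active, disable it immediately, then mask it unless a local unit file exists. A Python restatement of that logic, assuming plain systemctl calls (the helper name is ours, not the playbook's):

    import os
    import subprocess

    def disable_certmonger():
        # `systemctl is-active` exits 0 only when the unit is active.
        active = subprocess.run(
            ["systemctl", "is-active", "certmonger.service"],
            capture_output=True).returncode == 0
        if active:
            subprocess.run(["systemctl", "disable", "--now",
                            "certmonger.service"], check=True)
            # Mirror the `test -f ... || systemctl mask` guard: mask only
            # when no local unit file overrides the service.
            if not os.path.isfile("/etc/systemd/system/certmonger.service"):
                subprocess.run(["systemctl", "mask", "certmonger.service"],
                               check=True)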
Nov 29 10:19:47 np0005539860 python3.9[216294]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Nov 29 10:19:48 np0005539860 python3.9[216446]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:19:48 np0005539860 systemd[1]: Reloading.
Nov 29 10:19:48 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it safer and more robust.
Nov 29 10:19:48 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:19:48 np0005539860 nova_compute[189485]: 2025-11-29 15:19:48.888 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:19:49 np0005539860 nova_compute[189485]: 2025-11-29 15:19:49.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:19:49 np0005539860 nova_compute[189485]: 2025-11-29 15:19:49.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 10:19:49 np0005539860 nova_compute[189485]: 2025-11-29 15:19:49.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 10:19:49 np0005539860 python3.9[216633]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:19:49 np0005539860 nova_compute[189485]: 2025-11-29 15:19:49.849 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 10:19:49 np0005539860 nova_compute[189485]: 2025-11-29 15:19:49.850 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:19:50 np0005539860 nova_compute[189485]: 2025-11-29 15:19:50.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:19:50 np0005539860 nova_compute[189485]: 2025-11-29 15:19:50.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:19:50 np0005539860 nova_compute[189485]: 2025-11-29 15:19:50.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:19:50 np0005539860 nova_compute[189485]: 2025-11-29 15:19:50.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
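The run_periodic_tasks entries above are oslo.service iterating over ComputeManager methods that were registered as periodic tasks and invoking each one per tick. A stripped-down model of that dispatch, with hypothetical names in place of nova's:

    import time

    def periodic_task(fn):
        # Marker decorator; oslo.service attaches interval metadata in a
        # similar way before collecting tasks at class definition time.
        fn._periodic = True
        return fn

    class Manager:
        @periodic_task
        def _poll_rebooting_instances(self):
            print("Running periodic task _poll_rebooting_instances")

        @periodic_task
        def _heal_instance_info_cache(self):
            print("Running periodic task _heal_instance_info_cache")

        def run_periodic_tasks(self):
            for name in dir(self):
                member = getattr(self, name)
                if getattr(member, "_periodic", False):
                    member()

    manager = Manager()
    for _ in range(2):          # two ticks of the periodic loop
        manager.run_periodic_tasks()
        time.sleep(1)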
Nov 29 10:19:50 np0005539860 python3.9[216786]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:19:51 np0005539860 python3.9[216936]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.527 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.528 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.528 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
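The Acquiring/acquired/released triplet is oslo.concurrency's lock instrumentation: a named in-process lock guards the resource tracker, and the waited/held durations are logged around the critical section. A stdlib-only sketch of the same pattern (oslo's real lockutils adds file locks, semaphores, and decorators on top):

    import threading
    import time

    _locks = {}

    def named_lock(name):
        # One shared Lock object per name, like lockutils' internal registry.
        return _locks.setdefault(name, threading.Lock())

    def log_acquire(name, owner):
        lock = named_lock(name)
        print(f'Acquiring lock "{name}" by "{owner}"')
        start = time.monotonic()
        lock.acquire()
        print(f'Lock "{name}" acquired by "{owner}" :: '
              f'waited {time.monotonic() - start:.3f}s')
        return lock

    lock = log_acquire("compute_resources", "clean_compute_node_cache")
    try:
        pass  # critical section, e.g. pruning the compute node cache
    finally:
        lock.release()
        print('Lock "compute_resources" released')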
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.529 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 10:19:52 np0005539860 python3.9[217088]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.707 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.708 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5887MB free_disk=72.43936157226562GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
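The pci_devices field in that resource view is ordinary JSON, so it can be summarized directly; the sketch below parses a trimmed copy and counts devices per vendor (1af4 is virtio, 8086 is Intel):

    import json
    from collections import Counter

    # Trimmed copy of the logged pci_devices payload; two of the eleven
    # entries are enough to show the shape.
    pci = json.loads("""[
      {"address": "0000:00:07.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:01.0", "vendor_id": "8086", "product_id": "7000"}
    ]""")
    print(Counter(dev["vendor_id"] for dev in pci))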
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.708 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.708 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.789 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.790 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.816 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.830 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
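Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class, so the reported numbers work out to 32 VCPU, 7167 MB of RAM, and 71.1 GB of disk. A short check of that arithmetic:

    # Inventory values copied from the log line above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 79, "reserved": 0, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")   # 7167, 32, 71.1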
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.831 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 10:19:52 np0005539860 nova_compute[189485]: 2025-11-29 15:19:52.831 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.124s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:19:53 np0005539860 python3.9[217209]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429592.0222044-125-241038155108852/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:19:53 np0005539860 podman[217235]: 2025-11-29 15:19:53.635539832 +0000 UTC m=+0.082052057 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 10:19:54 np0005539860 python3.9[217386]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Nov 29 10:19:55 np0005539860 python3.9[217537]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:56 np0005539860 python3.9[217658]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764429595.4454887-171-181642169972588/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:57 np0005539860 python3.9[217808]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:57 np0005539860 python3.9[217929]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764429596.893993-171-179328693714790/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
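Each ansible-ansible.legacy.stat / copy pair above is the idempotent deploy pattern: checksum the destination, compare it with the rendered source's sha1 (the checksum logged in the copy call), and rewrite only on mismatch. A minimal restatement of that idea, not the modules' actual code:

    import hashlib
    import os
    import shutil

    def sha1sum(path):
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def copy_if_changed(src, dest):
        # Copy only when the destination is absent or its checksum differs,
        # which is what makes repeated runs report no changes.
        if not os.path.exists(dest) or sha1sum(dest) != sha1sum(src):
            shutil.copy2(src, dest)
            return True    # "changed" in ansible terms
        return False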
Nov 29 10:19:58 np0005539860 python3.9[218079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:19:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:19:59.142 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:19:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:19:59.142 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:19:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:19:59.143 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:19:59 np0005539860 python3.9[218200]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764429598.186304-171-139987082135727/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:19:59 np0005539860 podman[203677]: time="2025-11-29T15:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 10:19:59 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22542 "" "Go-http-client/1.1"
Nov 29 10:19:59 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3419 "" "Go-http-client/1.1"
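The two GET requests above are the libpod REST API answering over podman's unix socket, the same unix:///run/podman/podman.sock endpoint the podman_exporter container is configured with. A stdlib-only probe of that socket, with the path assumed from the log:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client has no built-in unix-socket support, so connect()
        # is overridden to dial an AF_UNIX path instead of TCP.
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    response = conn.getresponse()
    print(response.status, len(response.read()), "bytes")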
Nov 29 10:20:00 np0005539860 python3.9[218351]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:20:00 np0005539860 python3.9[218503]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.045 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.046 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.046 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.046 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.048 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f56000>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.050 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.054 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.055 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.056 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:20:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:20:01.057 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
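The discover/skip pairs and the "Finished processing pollster" lines above trace one ceilometer polling cycle: the local_instances discovery apparently finds no instances on this compute node, so every instance-scoped pollster is skipped. A minimal Python sketch of that control flow follows; every name in it is an illustrative placeholder, not ceilometer's real API.

    def run_polling_cycle(pollsters, discover):
        # Illustrative control flow only, mirroring the DEBUG lines above.
        for pollster in pollsters:
            resources = discover('local_instances')
            if not resources:
                print(f'Skip pollster {pollster["name"]}, '
                      'no resources found this cycle')
                continue
            for sample in pollster['get_samples'](resources):
                pass  # hand each sample to the configured publisher
            print(f'Finished processing pollster [{pollster["name"]}].')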
Nov 29 10:20:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:20:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 10:20:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:20:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 10:20:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
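The exporter errors above all reduce to the same condition: its appctl-style calls need a per-daemon control socket under the run directory, and none exists for ovn-northd or the OVS DB server on this node. A hypothetical sketch of that lookup, assuming the conventional <daemon>.<pid>.ctl naming; paths and pattern are assumptions, not the exporter's actual code.

    import glob
    import os

    def find_control_socket(rundir, daemon):
        # Assumed pattern: /run/ovn/ovn-northd.<pid>.ctl and similar.
        matches = glob.glob(os.path.join(rundir, f'{daemon}.*.ctl'))
        if not matches:
            raise FileNotFoundError(
                f'no control socket files found for {daemon}')
        return matches[0]

    # e.g. find_control_socket('/run/ovn', 'ovn-northd') raises here,
    # which is what the ERROR lines above report.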
Nov 29 10:20:01 np0005539860 python3.9[218658]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:02 np0005539860 podman[218753]: 2025-11-29 15:20:02.218525605 +0000 UTC m=+0.075473791 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0)
Nov 29 10:20:02 np0005539860 python3.9[218798]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429601.1903768-230-3035443853385/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:03 np0005539860 python3.9[218950]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:03 np0005539860 python3.9[219026]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:04 np0005539860 python3.9[219176]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:04 np0005539860 python3.9[219297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429603.8619707-230-165116232642378/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:05 np0005539860 podman[219421]: 2025-11-29 15:20:05.551456505 +0000 UTC m=+0.097134841 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Nov 29 10:20:05 np0005539860 podman[219422]: 2025-11-29 15:20:05.592475173 +0000 UTC m=+0.141683513 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 10:20:05 np0005539860 python3.9[219478]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:06 np0005539860 python3.9[219614]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429605.1425998-230-260125044927261/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:07 np0005539860 python3.9[219764]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:07 np0005539860 python3.9[219886]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429606.5046048-230-40855371178505/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
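Each config deployment above appears as an ansible.legacy.stat (get_checksum=True, SHA-1) followed by a copy only when the content differs, which is what keeps these plays idempotent. A simplified sketch of that stat-then-copy pattern, not the actual module code:

    import hashlib
    import os
    import shutil

    def sha1(path):
        digest = hashlib.sha1()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                digest.update(chunk)
        return digest.hexdigest()

    def deploy(src, dest):
        # Copy only when the destination is missing or its checksum differs.
        if not os.path.exists(dest) or sha1(dest) != sha1(src):
            shutil.copy(src, dest)
            return True   # task reports "changed"
        return False      # task reports "ok"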
Nov 29 10:20:07 np0005539860 podman[219887]: 2025-11-29 15:20:07.732688119 +0000 UTC m=+0.053921685 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, release=1755695350, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Nov 29 10:20:08 np0005539860 python3.9[220058]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:08 np0005539860 python3.9[220179]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429607.853355-230-162419829220443/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:09 np0005539860 podman[220279]: 2025-11-29 15:20:09.658373363 +0000 UTC m=+0.096014237 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd)
Nov 29 10:20:09 np0005539860 python3.9[220349]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:10 np0005539860 python3.9[220425]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:11 np0005539860 python3.9[220577]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:11 np0005539860 python3.9[220729]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:12 np0005539860 podman[220853]: 2025-11-29 15:20:12.513788143 +0000 UTC m=+0.050318491 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 10:20:12 np0005539860 python3.9[220905]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:20:13 np0005539860 python3.9[221057]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:14 np0005539860 python3.9[221180]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429612.971916-349-218487059237777/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:20:14 np0005539860 python3.9[221256]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:15 np0005539860 python3.9[221379]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429612.971916-349-218487059237777/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:20:16 np0005539860 python3.9[221531]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:20:16 np0005539860 python3.9[221654]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1764429615.4831104-349-139217437652559/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Nov 29 10:20:17 np0005539860 python3.9[221806]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Nov 29 10:20:18 np0005539860 python3.9[221958]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:20:20 np0005539860 python3[222110]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:20:20 np0005539860 podman[222148]: 2025-11-29 15:20:20.614808463 +0000 UTC m=+0.063350981 container create 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 29 10:20:20 np0005539860 podman[222148]: 2025-11-29 15:20:20.588348613 +0000 UTC m=+0.036891161 image pull 743c1960518ee2a8df257b87dd40a31faa57a99c6d0aa394baae4cd418c3c2b2 quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Nov 29 10:20:20 np0005539860 python3[222110]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
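The PODMAN-CONTAINER-DEBUG line above shows edpm_container_manage expanding the config_data dict into a full podman create invocation. A rough sketch of that mapping, illustrative only and not the module's source:

    def podman_create_args(name, cfg):
        # Flatten a config_data-style dict into podman create flags,
        # following the correspondence visible in the logged command.
        args = ['podman', 'create', '--name', name,
                '--log-driver', 'journald']
        for key, value in cfg.get('environment', {}).items():
            args += ['--env', f'{key}={value}']
        if 'security_opt' in cfg:
            args += ['--security-opt', cfg['security_opt']]
        if cfg.get('net') == 'host':
            args += ['--network', 'host']
        for volume in cfg.get('volumes', []):
            args += ['--volume', volume]
        args.append(cfg['image'])
        if cfg.get('command'):
            args.append(cfg['command'])
        return args

    # e.g. podman_create_args('ceilometer_agent_ipmi', config_data)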
Nov 29 10:20:21 np0005539860 python3.9[222337]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:20:22 np0005539860 python3.9[222491]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:23 np0005539860 python3.9[222642]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764429622.7502909-427-131074540349188/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:24 np0005539860 podman[222690]: 2025-11-29 15:20:24.162911438 +0000 UTC m=+0.071468139 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:20:24 np0005539860 python3.9[222739]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:20:24 np0005539860 systemd[1]: Reloading.
Nov 29 10:20:24 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:20:24 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, to make it safer and more robust.
Nov 29 10:20:25 np0005539860 python3.9[222850]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:20:25 np0005539860 systemd[1]: Reloading.
Nov 29 10:20:25 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:20:25 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, to make it safer and more robust.
Nov 29 10:20:25 np0005539860 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 29 10:20:25 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:20:25 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:20:25 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:20:25 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 29 10:20:25 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 29 10:20:25 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.
Nov 29 10:20:25 np0005539860 podman[222890]: 2025-11-29 15:20:25.897546618 +0000 UTC m=+0.176166698 container init 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 10:20:25 np0005539860 ceilometer_agent_ipmi[222906]: + sudo -E kolla_set_configs
Nov 29 10:20:25 np0005539860 podman[222890]: 2025-11-29 15:20:25.947760405 +0000 UTC m=+0.226380505 container start 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi)
Nov 29 10:20:25 np0005539860 podman[222890]: ceilometer_agent_ipmi
Nov 29 10:20:25 np0005539860 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Validating config file
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Copying service configuration files
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: INFO:__main__:Writing out command to execute
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: ++ cat /run_command
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: + ARGS=
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: + sudo kolla_copy_cacerts
Nov 29 10:20:26 np0005539860 podman[222913]: 2025-11-29 15:20:26.043859664 +0000 UTC m=+0.077784688 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 10:20:26 np0005539860 systemd[1]: 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf-f43ba605d288ee7.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:20:26 np0005539860 systemd[1]: 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf-f43ba605d288ee7.service: Failed with result 'exit-code'.
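The two systemd lines above come from the transient unit wrapping the container's first health probe: the probe exits 1, apparently because it runs before the agent is up, so podman records health_status=starting with health_failing_streak=1 in the event just before. The same probe can be replayed by hand; "podman healthcheck run" is the real CLI, and the id is the one from this log:

    import subprocess

    result = subprocess.run(
        ['podman', 'healthcheck', 'run',
         '6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf'],
        capture_output=True, text=True)
    print(result.returncode)  # non-zero until the agent finishes starting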
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: + [[ ! -n '' ]]
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: + . kolla_extend_start
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: + umask 0022
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
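The '+' lines above are kolla_start's shell trace: kolla_set_configs applies the COPY_ALWAYS strategy (delete the destination, copy the source back in, reset permissions), kolla_copy_cacerts installs the CA bundle, and the command read from /run_command is exec'd. A stand-alone sketch of the copy loop, simplified; the real tool drives this from /var/lib/kolla/config_files/config.json with per-file owner and permission settings, and the 0o600 mode below is an assumption:

    import os
    import shutil

    def copy_always(config_files):
        # config_files maps source path -> destination path.
        for src, dest in config_files.items():
            if os.path.exists(dest):
                os.remove(dest)        # "Deleting <dest>"
            shutil.copy(src, dest)     # "Copying <src> to <dest>"
            os.chmod(dest, 0o600)      # "Setting permission for <dest>"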
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.892 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.892 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.892 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.892 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.893 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.894 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.895 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.896 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.897 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.898 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.899 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.900 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.901 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.902 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.903 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.904 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.905 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.906 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.907 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.907 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.907 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.907 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.907 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.907 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.907 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.907 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.927 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.929 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 29 10:20:26 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:26.931 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 29 10:20:26 np0005539860 python3.9[223087]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.043 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpgb33gedv/privsep.sock']
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.681 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.682 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpgb33gedv/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.579 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.582 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.584 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.584 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 29 10:20:27 np0005539860 python3.9[223247]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.784 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.784 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.785 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.785 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.785 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.785 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.785 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.785 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.786 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.786 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.786 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.786 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.786 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.789 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.790 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.791 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.792 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.793 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.794 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.794 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.794 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.794 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.794 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.794 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.794 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.794 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.795 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.796 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.797 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.798 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.799 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.800 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.801 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.802 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.803 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.804 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.805 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.806 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.807 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.808 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.808 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
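The long run of log_opt_values lines above is emitted by a single oslo.config helper that cotyledon's oslo_config_glue calls once at service startup; options registered as secret (transport_url, passwords, access keys) are masked as ****. A minimal sketch of that mechanism, using only the public oslo.config API:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # Two representative options; secret=True is what renders a value
    # as **** in the dump above.
    CONF.register_opts([
        cfg.StrOpt('log_dir', default='/var/log/ceilometer'),
        cfg.StrOpt('transport_url', secret=True),
    ])

    CONF([])  # parse an (empty) command line and default config files
    # Emits one DEBUG line per option, like cfg.py:2602/2609 above.
    CONF.log_opt_values(LOG, logging.DEBUG)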
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.808 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 29 10:20:27 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:27.811 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
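The ceilometer.agent line above logs the polling configuration after parsing: one source named 'pollsters' that polls every 120 seconds for all hardware.* meters. As a cross-check, a sketch of polling.yaml content that parses to exactly that dict (the file itself is not reproduced in the log):

    import textwrap

    import yaml

    POLLING_YAML = textwrap.dedent("""\
        sources:
          - name: pollsters
            interval: 120
            meters:
              - hardware.*
    """)

    # Equals the dict logged by ceilometer.agent load_config above.
    assert yaml.safe_load(POLLING_YAML) == {
        'sources': [{'name': 'pollsters',
                     'interval': 120,
                     'meters': ['hardware.*']}],
    }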
Nov 29 10:20:28 np0005539860 python3[223404]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Nov 29 10:20:28 np0005539860 podman[223442]: 2025-11-29 15:20:28.887848008 +0000 UTC m=+0.057417662 container create 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, container_name=kepler, maintainer=Red Hat, Inc., config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64)
Nov 29 10:20:28 np0005539860 podman[223442]: 2025-11-29 15:20:28.855748197 +0000 UTC m=+0.025317911 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Nov 29 10:20:28 np0005539860 python3[223404]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
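The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage expands the kepler.json config_data into a podman create command line. An illustrative Python sketch of that mapping for the fields visible above (an approximation for readability, not the actual edpm_ansible source):

    def podman_create_args(name, cfg):
        # Map a subset of edpm config_data keys onto podman CLI flags,
        # roughly in the order seen in the debug line above.
        args = ['podman', 'create', '--name', name,
                '--conmon-pidfile', '/run/%s.pid' % name]
        for key, value in cfg.get('environment', {}).items():
            args += ['--env', '%s=%s' % (key, value)]
        if 'healthcheck' in cfg:
            args += ['--healthcheck-command', cfg['healthcheck']['test']]
        args += ['--log-driver', 'journald', '--log-level', 'info']
        if cfg.get('net') == 'host':
            args += ['--network', 'host']
        if cfg.get('privileged') == 'true':
            args.append('--privileged=True')
        for port in cfg.get('ports', []):
            args += ['--publish', port]
        for volume in cfg.get('volumes', []):
            args += ['--volume', volume]
        args.append(cfg['image'])
        if 'command' in cfg:
            args.append(cfg['command'])
        return args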
Nov 29 10:20:29 np0005539860 podman[203677]: time="2025-11-29T15:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 10:20:29 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28286 "" "Go-http-client/1.1"
Nov 29 10:20:29 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Nov 29 10:20:29 np0005539860 python3.9[223632]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:20:30 np0005539860 python3.9[223788]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 10:20:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:20:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:20:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 10:20:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
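The openstack_network_exporter errors above all share one root cause: the exporter looks for ovs-appctl control sockets in order to query ovsdb-server, ovn-northd and the OVS datapath, and none exist because those daemons are not running on this compute node. A hedged Python sketch of that kind of probe, assuming the conventional /var/run/openvswitch socket directory (ovn-northd typically uses /var/run/ovn; the exporter's actual Go code is not shown in this log):

    import glob

    def find_ctl_socket(daemon, rundir='/var/run/openvswitch'):
        # ovs-appctl targets a <daemon>.<pid>.ctl unix socket.
        matches = glob.glob('%s/%s.*.ctl' % (rundir, daemon))
        return matches[0] if matches else None

    for daemon in ('ovsdb-server', 'ovn-northd'):
        if find_ctl_socket(daemon) is None:
            print('no control socket files found for %s' % daemon)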
Nov 29 10:20:31 np0005539860 python3.9[223939]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1764429630.6509287-489-261764594229829/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:32 np0005539860 python3.9[224015]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Nov 29 10:20:32 np0005539860 systemd[1]: Reloading.
Nov 29 10:20:32 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:20:32 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:20:32 np0005539860 podman[224051]: 2025-11-29 15:20:32.536892271 +0000 UTC m=+0.058055998 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:20:33 np0005539860 python3.9[224146]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Nov 29 10:20:33 np0005539860 systemd[1]: Reloading.
Nov 29 10:20:33 np0005539860 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Nov 29 10:20:33 np0005539860 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Nov 29 10:20:33 np0005539860 systemd[1]: Starting kepler container...
Nov 29 10:20:33 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:20:33 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da.
Nov 29 10:20:33 np0005539860 podman[224185]: 2025-11-29 15:20:33.535542795 +0000 UTC m=+0.140546212 container init 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, config_id=edpm, io.buildah.version=1.29.0, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-container, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9)
Nov 29 10:20:33 np0005539860 kepler[224200]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 29 10:20:33 np0005539860 podman[224185]: 2025-11-29 15:20:33.566956178 +0000 UTC m=+0.171959645 container start 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, vcs-type=git, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Nov 29 10:20:33 np0005539860 podman[224185]: kepler
Nov 29 10:20:33 np0005539860 kepler[224200]: I1129 15:20:33.574056       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 29 10:20:33 np0005539860 kepler[224200]: I1129 15:20:33.574229       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 29 10:20:33 np0005539860 kepler[224200]: I1129 15:20:33.574257       1 config.go:295] kernel version: 5.14
Nov 29 10:20:33 np0005539860 kepler[224200]: I1129 15:20:33.575136       1 power.go:78] Unable to obtain power, use estimate method
Nov 29 10:20:33 np0005539860 kepler[224200]: I1129 15:20:33.575162       1 redfish.go:169] failed to get redfish credential file path
Nov 29 10:20:33 np0005539860 kepler[224200]: I1129 15:20:33.575638       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 29 10:20:33 np0005539860 kepler[224200]: I1129 15:20:33.575672       1 power.go:79] using none to obtain power
Nov 29 10:20:33 np0005539860 kepler[224200]: E1129 15:20:33.575690       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 29 10:20:33 np0005539860 kepler[224200]: E1129 15:20:33.575713       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 29 10:20:33 np0005539860 kepler[224200]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 29 10:20:33 np0005539860 kepler[224200]: I1129 15:20:33.577870       1 exporter.go:84] Number of CPUs: 8
Nov 29 10:20:33 np0005539860 systemd[1]: Started kepler container.
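[editor's note] The config_data dict embedded in the podman "container init"/"container start" records above is edpm_ansible's container definition for kepler. As a rough illustration only (not the actual edpm_ansible code path), it maps onto a podman run invocation roughly like the one this sketch prints; the rendering logic is an assumption, while the values are copied from the log (environment and volumes abridged).

    # Hedged sketch: render the logged config_data into a podman command line.
    config = {
        "image": "quay.io/sustainable_computing_io/kepler:release-0.7.12",
        "privileged": "true",
        "net": "host",
        "ports": ["8888:8888"],
        "command": "-v=2",
        "environment": {
            "ENABLE_GPU": "true",
            "EXPOSE_CONTAINER_METRICS": "true",
            "ENABLE_PROCESS_METRICS": "true",
            "EXPOSE_VM_METRICS": "true",
        },
        "volumes": ["/lib/modules:/lib/modules:ro", "/sys:/sys", "/proc:/proc"],
    }

    cmd = ["podman", "run", "--detach", "--name", "kepler", "--net", config["net"]]
    if config["privileged"] == "true":
        cmd.append("--privileged")
    for port in config["ports"]:
        cmd += ["--publish", port]
    for key, val in config["environment"].items():
        cmd += ["--env", f"{key}={val}"]
    for vol in config["volumes"]:
        cmd += ["--volume", vol]
    cmd += [config["image"], config["command"]]
    print(" ".join(cmd))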
Nov 29 10:20:33 np0005539860 podman[224206]: 2025-11-29 15:20:33.638255751 +0000 UTC m=+0.061519202 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-type=git, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, vendor=Red Hat, Inc.)
Nov 29 10:20:33 np0005539860 systemd[1]: 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da-443390e5603e586b.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:20:33 np0005539860 systemd[1]: 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da-443390e5603e586b.service: Failed with result 'exit-code'.
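[editor's note] The transient unit failing above is the per-container healthcheck for kepler: the podman health_status record two lines earlier shows health_status=starting with health_failing_streak=1, i.e. the configured test (/openstack/healthcheck kepler) exited non-zero while the exporter was still coming up, and systemd recorded that exit code. A minimal Python sketch of what the timer-driven check does, assuming only that podman healthcheck run mirrors the test command's exit code:

    import subprocess

    def run_healthcheck(container_id: str) -> bool:
        # `podman healthcheck run` executes the container's configured test
        # (here: /openstack/healthcheck kepler) and exits with its status.
        rc = subprocess.call(["podman", "healthcheck", "run", container_id])
        return rc == 0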
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.139748       1 watcher.go:83] Using in cluster k8s config
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.140149       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 29 10:20:34 np0005539860 kepler[224200]: E1129 15:20:34.140853       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
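[editor's note] The three watcher lines above show Kepler probing for an in-cluster Kubernetes API and disabling the watcher when KUBERNETES_SERVICE_HOST / KUBERNETES_SERVICE_PORT are absent, which is expected on a standalone EDPM node outside any pod. A minimal Python analogue of that gate (Kepler itself is Go; this only assumes the standard in-cluster env vars are the signal):

    import os

    def k8s_watcher_enabled() -> bool:
        # In-cluster config is only loadable when the service env vars exist.
        return bool(os.environ.get("KUBERNETES_SERVICE_HOST")
                    and os.environ.get("KUBERNETES_SERVICE_PORT"))

    if not k8s_watcher_enabled():
        print("k8s APIserver watcher was not enabled")  # matches the log above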
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.150303       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.150645       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.157048       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.157384       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.168318       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.168536       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.168782       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.178112       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.178338       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.178538       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.178790       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.179019       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.179223       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.179529       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.179857       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.180090       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.180343       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.180613       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 29 10:20:34 np0005539860 kepler[224200]: I1129 15:20:34.181335       1 exporter.go:208] Started Kepler in 607.569492ms
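[editor's note] Reading the startup sequence above as a whole: the Redfish and ACPI power-meter probes fail inside the VM, Kepler selects "none" as its power source and falls back to the Ratio/Regressor estimation models, then serves Prometheus metrics on 0.0.0.0:8888. A condensed sketch of that fallback order; the probe flags are hypothetical stand-ins for Kepler's internal checks, not its actual Go code:

    def pick_power_source(probes: dict) -> str:
        # Prefer a real meter; otherwise report "none" and estimate.
        for name in ("rapl", "redfish", "acpi"):
            if probes.get(name):
                return name
        return "none"  # -> "using none to obtain power", models estimate

    assert pick_power_source({"rapl": False, "redfish": False, "acpi": False}) == "none"

Once "Started Kepler" is logged, the exporter should answer a scrape such as: curl http://127.0.0.1:8888/metrics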
Nov 29 10:20:34 np0005539860 python3.9[224387]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:20:34 np0005539860 systemd[1]: Stopping ceilometer_agent_ipmi container...
Nov 29 10:20:34 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:34.543 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Nov 29 10:20:34 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:34.645 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Nov 29 10:20:34 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:34.646 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Nov 29 10:20:34 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:34.646 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Nov 29 10:20:34 np0005539860 ceilometer_agent_ipmi[222906]: 2025-11-29 15:20:34.662 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
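[editor's note] The four cotyledon lines above trace a clean graceful stop: the master process catches SIGTERM, forwards it to its service children (here AgentManager(0)), waits for them, and logs "Shutdown finish". A stripped-down sketch of that pattern, simplified from what the log shows and not cotyledon itself:

    import os, signal, sys

    children = []  # pids of forked service processes (hypothetical)

    def on_sigterm(signum, frame):
        for pid in children:
            os.kill(pid, signal.SIGTERM)   # "Killing services with signal SIGTERM"
        for pid in children:
            os.waitpid(pid, 0)             # "Waiting services to terminate"
        sys.exit(0)                        # "Shutdown finish"

    signal.signal(signal.SIGTERM, on_sigterm)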
Nov 29 10:20:34 np0005539860 systemd[1]: libpod-6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.scope: Deactivated successfully.
Nov 29 10:20:34 np0005539860 systemd[1]: libpod-6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.scope: Consumed 2.175s CPU time.
Nov 29 10:20:34 np0005539860 podman[224401]: 2025-11-29 15:20:34.840015004 +0000 UTC m=+0.353221478 container died 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 10:20:34 np0005539860 systemd[1]: 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf-f43ba605d288ee7.timer: Deactivated successfully.
Nov 29 10:20:34 np0005539860 systemd[1]: Stopped /usr/bin/podman healthcheck run 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.
Nov 29 10:20:34 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf-userdata-shm.mount: Deactivated successfully.
Nov 29 10:20:34 np0005539860 systemd[1]: var-lib-containers-storage-overlay-878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9-merged.mount: Deactivated successfully.
Nov 29 10:20:34 np0005539860 podman[224401]: 2025-11-29 15:20:34.909851698 +0000 UTC m=+0.423058172 container cleanup 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 10:20:34 np0005539860 podman[224401]: ceilometer_agent_ipmi
Nov 29 10:20:35 np0005539860 podman[224428]: ceilometer_agent_ipmi
Nov 29 10:20:35 np0005539860 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Nov 29 10:20:35 np0005539860 systemd[1]: Stopped ceilometer_agent_ipmi container.
Nov 29 10:20:35 np0005539860 systemd[1]: Starting ceilometer_agent_ipmi container...
Nov 29 10:20:35 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:20:35 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Nov 29 10:20:35 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Nov 29 10:20:35 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Nov 29 10:20:35 np0005539860 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/878cca9e17ae56699fb807cfc78044546d65b3c2f2ad67f32c74c088ea4b5ff9/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Nov 29 10:20:35 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.
Nov 29 10:20:35 np0005539860 podman[224440]: 2025-11-29 15:20:35.360974271 +0000 UTC m=+0.310092840 container init 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + sudo -E kolla_set_configs
Nov 29 10:20:35 np0005539860 podman[224440]: 2025-11-29 15:20:35.400940514 +0000 UTC m=+0.350059013 container start 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Nov 29 10:20:35 np0005539860 podman[224440]: ceilometer_agent_ipmi
Nov 29 10:20:35 np0005539860 systemd[1]: Started ceilometer_agent_ipmi container.
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Validating config file
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Copying service configuration files
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: INFO:__main__:Writing out command to execute
Nov 29 10:20:35 np0005539860 podman[224462]: 2025-11-29 15:20:35.49848501 +0000 UTC m=+0.077258513 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, config_id=edpm, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: ++ cat /run_command
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 29 10:20:35 np0005539860 systemd[1]: 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf-6002aa3ddc17e59.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + ARGS=
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + sudo kolla_copy_cacerts
Nov 29 10:20:35 np0005539860 systemd[1]: 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf-6002aa3ddc17e59.service: Failed with result 'exit-code'.
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + [[ ! -n '' ]]
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + . kolla_extend_start
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + umask 0022
Nov 29 10:20:35 np0005539860 ceilometer_agent_ipmi[224454]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
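[editor's note] The kolla_start trace above (from "sudo -E kolla_set_configs" through the final exec) follows the standard Kolla entrypoint: load /var/lib/kolla/config_files/config.json, validate it, copy each listed file into place under COPY_ALWAYS, write the service command to /run_command, and exec it. A stripped-down sketch of that copy loop, assuming the usual config.json keys ("command", and "config_files" entries with "source"/"dest"); ownership, permissions, and error handling are omitted:

    import json, shutil

    with open("/var/lib/kolla/config_files/config.json") as fh:
        cfg = json.load(fh)

    for item in cfg.get("config_files", []):
        shutil.copy(item["source"], item["dest"])  # "Copying ... to ..."
        # mode/owner would be applied here ("Setting permission for ...")

    with open("/run_command", "w") as fh:
        fh.write(cfg["command"])                   # "Writing out command to execute"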
Nov 29 10:20:36 np0005539860 podman[224608]: 2025-11-29 15:20:36.232579097 +0000 UTC m=+0.110827336 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:20:36 np0005539860 podman[224609]: 2025-11-29 15:20:36.309360506 +0000 UTC m=+0.183161605 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.324 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.324 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.324 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.324 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.325 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.326 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.327 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.328 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.329 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.330 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.331 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.332 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.333 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.334 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.335 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.336 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.337 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.338 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.339 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.339 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.339 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.339 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.339 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.339 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.339 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.344 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.345 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.345 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.345 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.346 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.346 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.346 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.370 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.372 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.373 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Nov 29 10:20:36 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.398 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpvda7fz4g/privsep.sock']
Nov 29 10:20:36 np0005539860 python3.9[224669]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 10:20:36 np0005539860 systemd[1]: Stopping kepler container...
Nov 29 10:20:36 np0005539860 kepler[224200]: I1129 15:20:36.697626       1 exporter.go:218] Received shutdown signal
Nov 29 10:20:36 np0005539860 kepler[224200]: I1129 15:20:36.698048       1 exporter.go:226] Exiting...
Nov 29 10:20:36 np0005539860 systemd[1]: libpod-327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da.scope: Deactivated successfully.
Nov 29 10:20:36 np0005539860 podman[224687]: 2025-11-29 15:20:36.897866586 +0000 UTC m=+0.277367773 container died 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, vcs-type=git, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 29 10:20:36 np0005539860 systemd[1]: 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da-443390e5603e586b.timer: Deactivated successfully.
Nov 29 10:20:36 np0005539860 systemd[1]: Stopped /usr/bin/podman healthcheck run 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da.
Nov 29 10:20:36 np0005539860 systemd[1]: var-lib-containers-storage-overlay-67e97c5a2c64fe5f126154e5f211072b377308b92a9941ab3703304d6a44973f-merged.mount: Deactivated successfully.
Nov 29 10:20:36 np0005539860 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da-userdata-shm.mount: Deactivated successfully.
Nov 29 10:20:36 np0005539860 podman[224687]: 2025-11-29 15:20:36.942493634 +0000 UTC m=+0.321994821 container cleanup 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, name=ubi9, architecture=x86_64, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 29 10:20:36 np0005539860 podman[224687]: kepler
Nov 29 10:20:37 np0005539860 podman[224717]: kepler
Nov 29 10:20:37 np0005539860 systemd[1]: edpm_kepler.service: Deactivated successfully.
Nov 29 10:20:37 np0005539860 systemd[1]: Stopped kepler container.
Nov 29 10:20:37 np0005539860 systemd[1]: Starting kepler container...
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.045 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.045 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpvda7fz4g/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.936 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.943 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.947 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:36.955 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Nov 29 10:20:37 np0005539860 systemd[1]: Started libcrun container.
Nov 29 10:20:37 np0005539860 systemd[1]: Started /usr/bin/podman healthcheck run 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da.
Nov 29 10:20:37 np0005539860 podman[224728]: 2025-11-29 15:20:37.127369854 +0000 UTC m=+0.091812035 container init 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, architecture=x86_64, distribution-scope=public, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, build-date=2024-09-18T21:23:30)
Nov 29 10:20:37 np0005539860 kepler[224745]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 29 10:20:37 np0005539860 podman[224728]: 2025-11-29 15:20:37.157266495 +0000 UTC m=+0.121708646 container start 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, container_name=kepler, name=ubi9, release=1214.1726694543, version=9.4, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git)
Nov 29 10:20:37 np0005539860 podman[224728]: kepler
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.165404       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.165554       1 config.go:293] using gCgroup ID in the BPF program: true
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.165587       1 config.go:295] kernel version: 5.14
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.166189       1 power.go:78] Unable to obtain power, use estimate method
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.166207       1 redfish.go:169] failed to get redfish credential file path
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.166776       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.166786       1 power.go:79] using none to obtain power
Nov 29 10:20:37 np0005539860 kepler[224745]: E1129 15:20:37.166797       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Nov 29 10:20:37 np0005539860 kepler[224745]: E1129 15:20:37.166811       1 exporter.go:154] failed to init GPU accelerators: no devices found
Nov 29 10:20:37 np0005539860 systemd[1]: Started kepler container.
Nov 29 10:20:37 np0005539860 kepler[224745]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.171 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.172 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.173 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.174 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.174 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.171759       1 exporter.go:84] Number of CPUs: 8
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.177 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.177 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.177 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.177 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.177 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.177 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.177 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.177 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.178 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.178 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.178 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.178 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.178 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.178 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.178 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.178 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.179 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.180 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.181 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.182 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.183 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.184 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.185 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.186 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.187 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.188 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.189 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.190 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.191 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.192 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.193 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.194 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.195 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.196 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.196 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.196 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.196 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
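The asterisk line above closes the option dump. Values shown as **** (messaging_urls, telemetry_secret, access_key, secret_key, host_password, transport_url) are not missing from the config: oslo.config masks any option registered with secret=True when log_opt_values() runs. A minimal sketch of that behavior, assuming nothing beyond stock oslo.config (the option names here are illustrative, not ceilometer's registration code):

import logging
from oslo_config import cfg

# Two options: one secret, one not. Only the secret one is masked in the dump.
CONF = cfg.ConfigOpts()
CONF.register_opts([
    cfg.StrOpt('telemetry_secret', secret=True, default='s3cr3t'),
    cfg.StrOpt('event_topic', default='event'),
])

logging.basicConfig(level=logging.DEBUG)
CONF([])  # parse an empty argv
# Same call the agent makes (cfg.py:2609 in the lines above):
# telemetry_secret is printed as "****", event_topic verbatim.
CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)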
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.196 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Nov 29 10:20:37 np0005539860 ceilometer_agent_ipmi[224454]: 2025-11-29 15:20:37.200 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
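The dict in the load_config line maps one-to-one onto the polling.yaml the agent read. A hypothetical reconstruction (the file contents are inferred from the logged dict; the source name 'pollsters', the 120 s interval, and the hardware.* meter glob all come straight from it):

import yaml  # PyYAML

POLLING_YAML = """
sources:
  - name: pollsters
    interval: 120
    meters:
      - 'hardware.*'
"""

# Round-trips to exactly the dict ceilometer.agent logged above.
assert yaml.safe_load(POLLING_YAML) == {
    'sources': [{'name': 'pollsters', 'interval': 120,
                 'meters': ['hardware.*']}],
}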
Nov 29 10:20:37 np0005539860 podman[224755]: 2025-11-29 15:20:37.287599132 +0000 UTC m=+0.105081590 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, config_id=edpm, container_name=kepler, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64)
Nov 29 10:20:37 np0005539860 systemd[1]: 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da-309e03f1eb18f43e.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:20:37 np0005539860 systemd[1]: 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da-309e03f1eb18f43e.service: Failed with result 'exit-code'.
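The failed unit here appears to be the transient systemd unit podman creates to drive the kepler container's healthcheck ('/openstack/healthcheck kepler', per the config_data above); its exit status 1 lines up with the health_status=starting / health_failing_streak=1 report. Running the check by hand is equivalent, sketched with subprocess (container name taken from the log):

import subprocess

# `podman healthcheck run` executes the container's configured test and
# exits non-zero when the check fails, just as the transient unit did.
rc = subprocess.run(["podman", "healthcheck", "run", "kepler"]).returncode
print("healthy" if rc == 0 else f"unhealthy (rc={rc})")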
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.711885       1 watcher.go:83] Using in cluster k8s config
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.712144       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Nov 29 10:20:37 np0005539860 kepler[224745]: E1129 15:20:37.712351       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
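The watcher failure is expected on a standalone EDPM node: Kubernetes client libraries consider a process "in cluster" only when the two service environment variables named in the message are set, and kepler simply continues without the APIserver watcher when they are not. The check reduces to (a sketch, not kepler's Go source):

import os

def in_cluster() -> bool:
    # Mirrors the message above: both variables must be defined.
    return bool(os.environ.get("KUBERNETES_SERVICE_HOST")
                and os.environ.get("KUBERNETES_SERVICE_PORT"))

if not in_cluster():
    print("k8s APIserver watcher not enabled; running standalone")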
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.717200       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.717345       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.722422       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.722562       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.731723       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.731943       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.732119       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.741950       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.742168       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.742328       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.742505       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.742729       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.742922       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.743159       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.743396       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.743579       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.743862       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.744144       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Nov 29 10:20:37 np0005539860 kepler[224745]: I1129 15:20:37.744820       1 exporter.go:208] Started Kepler in 579.684973ms
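With the exporter listening on 0.0.0.0:8888, the Process/Container/VM/Node metrics registered above are scrapeable over plain HTTP. A quick local probe (assumes the conventional Prometheus /metrics path, which kepler serves):

from urllib.request import urlopen

with urlopen("http://127.0.0.1:8888/metrics", timeout=5) as resp:
    # Print the first few exposition lines as a smoke test.
    for line in resp.read().decode().splitlines()[:5]:
        print(line)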
Nov 29 10:20:37 np0005539860 python3.9[224930]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
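Restated outside ansible, the find task above (file_type=directory, recurse=False, hidden=False) is a one-level directory listing; a standard-library equivalent with the same path and filters:

import os

def healthcheck_dirs(path="/var/lib/openstack/healthchecks/"):
    # Directories only, non-recursive, skipping hidden entries,
    # not following symlinks (follow=False in the logged parameters).
    return [e.path for e in os.scandir(path)
            if e.is_dir(follow_symlinks=False)
            and not e.name.startswith(".")]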
Nov 29 10:20:37 np0005539860 podman[224941]: 2025-11-29 15:20:37.994305013 +0000 UTC m=+0.069883776 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter)
Nov 29 10:20:39 np0005539860 python3.9[225113]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Nov 29 10:20:40 np0005539860 podman[225249]: 2025-11-29 15:20:40.373511987 +0000 UTC m=+0.154591648 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 10:20:40 np0005539860 python3.9[225295]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:20:40 np0005539860 systemd[1]: Started libpod-conmon-c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.scope.
Nov 29 10:20:40 np0005539860 podman[225297]: 2025-11-29 15:20:40.697419767 +0000 UTC m=+0.122393794 container exec c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 10:20:40 np0005539860 podman[225297]: 2025-11-29 15:20:40.733411253 +0000 UTC m=+0.158385250 container exec_died c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 10:20:40 np0005539860 systemd[1]: libpod-conmon-c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.scope: Deactivated successfully.
Nov 29 10:20:41 np0005539860 python3.9[225480]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:20:41 np0005539860 systemd[1]: Started libpod-conmon-c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.scope.
Nov 29 10:20:41 np0005539860 podman[225481]: 2025-11-29 15:20:41.952165323 +0000 UTC m=+0.153072779 container exec c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:20:41 np0005539860 podman[225481]: 2025-11-29 15:20:41.987283235 +0000 UTC m=+0.188190681 container exec_died c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 10:20:42 np0005539860 systemd[1]: libpod-conmon-c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b.scope: Deactivated successfully.
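The two exec tasks above (id -u, then id -g) probe the uid and gid the ovn_controller container runs as, presumably feeding the ownership task that follows. Outside ansible the same probe is one podman exec per id flag (a sketch; container name and commands from the log):

import subprocess

def container_id_num(name: str, flag: str) -> int:
    # Equivalent of podman_container_exec with detach=False: run the
    # command, capture stdout, fail loudly on a non-zero exit.
    out = subprocess.run(["podman", "exec", name, "id", flag],
                         capture_output=True, text=True, check=True)
    return int(out.stdout.strip())

uid = container_id_num("ovn_controller", "-u")
gid = container_id_num("ovn_controller", "-g")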
Nov 29 10:20:43 np0005539860 podman[225636]: 2025-11-29 15:20:43.007723964 +0000 UTC m=+0.090628673 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 10:20:43 np0005539860 python3.9[225678]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
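The file task pins the healthcheck mount to root-owned mode 0700 all the way down (recurse=True). A plain-Python restatement of what that enforces (uid, gid, and mode values from the logged parameters):

import os

def enforce_tree(path, uid=0, gid=0, mode=0o700):
    # Apply owner and mode to every directory and file under path,
    # including path itself, matching the recursive ansible task.
    for root, dirs, files in os.walk(path):
        for p in [root] + [os.path.join(root, f) for f in files]:
            os.chown(p, uid, gid)
            os.chmod(p, mode)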
Nov 29 10:20:44 np0005539860 python3.9[225842]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Nov 29 10:20:45 np0005539860 python3.9[226007]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:20:45 np0005539860 systemd[1]: Started libpod-conmon-39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.scope.
Nov 29 10:20:45 np0005539860 podman[226008]: 2025-11-29 15:20:45.298940366 +0000 UTC m=+0.121616884 container exec 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 10:20:45 np0005539860 podman[226008]: 2025-11-29 15:20:45.329921097 +0000 UTC m=+0.152597625 container exec_died 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 10:20:45 np0005539860 systemd[1]: libpod-conmon-39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.scope: Deactivated successfully.
Nov 29 10:20:46 np0005539860 python3.9[226191]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:20:46 np0005539860 systemd[1]: Started libpod-conmon-39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.scope.
Nov 29 10:20:46 np0005539860 podman[226192]: 2025-11-29 15:20:46.418799282 +0000 UTC m=+0.140674846 container exec 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 10:20:46 np0005539860 podman[226192]: 2025-11-29 15:20:46.450203024 +0000 UTC m=+0.172078608 container exec_died 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 10:20:46 np0005539860 systemd[1]: libpod-conmon-39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1.scope: Deactivated successfully.
Nov 29 10:20:47 np0005539860 python3.9[226372]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:48 np0005539860 nova_compute[189485]: 2025-11-29 15:20:48.832 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:49 np0005539860 python3.9[226524]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Nov 29 10:20:49 np0005539860 nova_compute[189485]: 2025-11-29 15:20:49.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:49 np0005539860 nova_compute[189485]: 2025-11-29 15:20:49.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 10:20:49 np0005539860 nova_compute[189485]: 2025-11-29 15:20:49.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 10:20:49 np0005539860 nova_compute[189485]: 2025-11-29 15:20:49.509 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 10:20:50 np0005539860 python3.9[226689]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:20:50 np0005539860 systemd[1]: Started libpod-conmon-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope.
Nov 29 10:20:50 np0005539860 podman[226690]: 2025-11-29 15:20:50.157738237 +0000 UTC m=+0.088429913 container exec 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 10:20:50 np0005539860 podman[226690]: 2025-11-29 15:20:50.189843958 +0000 UTC m=+0.120535624 container exec_died 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 10:20:50 np0005539860 systemd[1]: libpod-conmon-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope: Deactivated successfully.
Nov 29 10:20:50 np0005539860 nova_compute[189485]: 2025-11-29 15:20:50.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:50 np0005539860 nova_compute[189485]: 2025-11-29 15:20:50.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:50 np0005539860 nova_compute[189485]: 2025-11-29 15:20:50.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:50 np0005539860 nova_compute[189485]: 2025-11-29 15:20:50.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:51 np0005539860 python3.9[226871]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:20:51 np0005539860 systemd[1]: Started libpod-conmon-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope.
Nov 29 10:20:51 np0005539860 podman[226872]: 2025-11-29 15:20:51.364262898 +0000 UTC m=+0.125160489 container exec 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 10:20:51 np0005539860 podman[226872]: 2025-11-29 15:20:51.397541231 +0000 UTC m=+0.158438822 container exec_died 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 10:20:51 np0005539860 systemd[1]: libpod-conmon-2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88.scope: Deactivated successfully.
Nov 29 10:20:52 np0005539860 python3.9[227054]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.525 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.526 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.964 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.965 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5708MB free_disk=72.44150161743164GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.966 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:20:52 np0005539860 nova_compute[189485]: 2025-11-29 15:20:52.966 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:20:53 np0005539860 nova_compute[189485]: 2025-11-29 15:20:53.033 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 10:20:53 np0005539860 nova_compute[189485]: 2025-11-29 15:20:53.034 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 10:20:53 np0005539860 nova_compute[189485]: 2025-11-29 15:20:53.060 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 10:20:53 np0005539860 nova_compute[189485]: 2025-11-29 15:20:53.076 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 10:20:53 np0005539860 nova_compute[189485]: 2025-11-29 15:20:53.078 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 10:20:53 np0005539860 nova_compute[189485]: 2025-11-29 15:20:53.078 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:20:53 np0005539860 python3.9[227206]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Nov 29 10:20:54 np0005539860 nova_compute[189485]: 2025-11-29 15:20:54.073 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:54 np0005539860 podman[227343]: 2025-11-29 15:20:54.350447127 +0000 UTC m=+0.094334722 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 10:20:54 np0005539860 nova_compute[189485]: 2025-11-29 15:20:54.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:20:54 np0005539860 nova_compute[189485]: 2025-11-29 15:20:54.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 10:20:54 np0005539860 python3.9[227394]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:20:54 np0005539860 systemd[1]: Started libpod-conmon-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope.
Nov 29 10:20:54 np0005539860 podman[227395]: 2025-11-29 15:20:54.819698728 +0000 UTC m=+0.232298124 container exec 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 10:20:54 np0005539860 podman[227395]: 2025-11-29 15:20:54.853409322 +0000 UTC m=+0.266008718 container exec_died 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 10:20:54 np0005539860 systemd[1]: libpod-conmon-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope: Deactivated successfully.
Nov 29 10:20:56 np0005539860 python3.9[227577]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:20:56 np0005539860 systemd[1]: Started libpod-conmon-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope.
Nov 29 10:20:56 np0005539860 podman[227578]: 2025-11-29 15:20:56.414193047 +0000 UTC m=+0.263683736 container exec 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:20:56 np0005539860 podman[227578]: 2025-11-29 15:20:56.454406416 +0000 UTC m=+0.303897035 container exec_died 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 10:20:56 np0005539860 systemd[1]: libpod-conmon-83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1.scope: Deactivated successfully.
Nov 29 10:20:57 np0005539860 python3.9[227761]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:20:58 np0005539860 python3.9[227913]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Nov 29 10:20:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:20:59.143 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:20:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:20:59.144 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:20:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:20:59.144 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:20:59 np0005539860 podman[203677]: time="2025-11-29T15:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 10:20:59 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28293 "" "Go-http-client/1.1"
Nov 29 10:20:59 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4253 "" "Go-http-client/1.1"
Nov 29 10:20:59 np0005539860 python3.9[228078]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:00 np0005539860 systemd[1]: Started libpod-conmon-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope.
Nov 29 10:21:00 np0005539860 podman[228079]: 2025-11-29 15:21:00.091717144 +0000 UTC m=+0.116923527 container exec e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 10:21:00 np0005539860 podman[228079]: 2025-11-29 15:21:00.123554039 +0000 UTC m=+0.148760382 container exec_died e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:21:00 np0005539860 systemd[1]: libpod-conmon-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope: Deactivated successfully.
Nov 29 10:21:01 np0005539860 python3.9[228260]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:01 np0005539860 systemd[1]: Started libpod-conmon-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope.
Nov 29 10:21:01 np0005539860 podman[228261]: 2025-11-29 15:21:01.257598605 +0000 UTC m=+0.125721034 container exec e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 10:21:01 np0005539860 podman[228261]: 2025-11-29 15:21:01.291550076 +0000 UTC m=+0.159672415 container exec_died e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 10:21:01 np0005539860 systemd[1]: libpod-conmon-e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22.scope: Deactivated successfully.
Nov 29 10:21:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 10:21:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 10:21:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:21:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:21:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 10:21:02 np0005539860 python3.9[228441]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:03 np0005539860 podman[228565]: 2025-11-29 15:21:03.252778976 +0000 UTC m=+0.108743049 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm)
Nov 29 10:21:03 np0005539860 python3.9[228610]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Nov 29 10:21:04 np0005539860 python3.9[228774]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:04 np0005539860 systemd[1]: Started libpod-conmon-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope.
Nov 29 10:21:04 np0005539860 podman[228775]: 2025-11-29 15:21:04.602152339 +0000 UTC m=+0.108631545 container exec 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 10:21:04 np0005539860 podman[228775]: 2025-11-29 15:21:04.635831683 +0000 UTC m=+0.142310879 container exec_died 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 10:21:04 np0005539860 systemd[1]: libpod-conmon-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope: Deactivated successfully.
Nov 29 10:21:05 np0005539860 podman[228956]: 2025-11-29 15:21:05.664287136 +0000 UTC m=+0.102702446 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Nov 29 10:21:05 np0005539860 systemd[1]: 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf-6002aa3ddc17e59.service: Main process exited, code=exited, status=1/FAILURE
Nov 29 10:21:05 np0005539860 systemd[1]: 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf-6002aa3ddc17e59.service: Failed with result 'exit-code'.
Nov 29 10:21:05 np0005539860 python3.9[228955]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:05 np0005539860 systemd[1]: Started libpod-conmon-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope.
Nov 29 10:21:05 np0005539860 podman[228974]: 2025-11-29 15:21:05.836304731 +0000 UTC m=+0.115191741 container exec 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 10:21:05 np0005539860 podman[228974]: 2025-11-29 15:21:05.870431707 +0000 UTC m=+0.149318767 container exec_died 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 10:21:05 np0005539860 systemd[1]: libpod-conmon-55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7.scope: Deactivated successfully.
Nov 29 10:21:06 np0005539860 podman[229104]: 2025-11-29 15:21:06.690603622 +0000 UTC m=+0.119222390 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 10:21:06 np0005539860 podman[229111]: 2025-11-29 15:21:06.739566146 +0000 UTC m=+0.161496304 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 10:21:06 np0005539860 python3.9[229198]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
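The ansible.builtin.file call above recursively enforces owner, group, and mode on the healthcheck mount. Roughly what the module does, sketched with the stdlib (path, ids, and mode copied from the logged invocation):

    import os

    path, uid, gid, mode = "/var/lib/openstack/healthchecks/podman_exporter", 0, 0, 0o700

    os.makedirs(path, exist_ok=True)              # state=directory
    for root, _dirs, files in os.walk(path):      # recurse=True
        for entry in [root] + [os.path.join(root, f) for f in files]:
            os.chown(entry, uid, gid)             # needs root privileges
            os.chmod(entry, mode)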
Nov 29 10:21:07 np0005539860 podman[229283]: 2025-11-29 15:21:07.700021944 +0000 UTC m=+0.131146350 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, name=ubi9, release-0.7.12=, distribution-scope=public, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 29 10:21:08 np0005539860 python3.9[229368]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Nov 29 10:21:08 np0005539860 podman[229451]: 2025-11-29 15:21:08.67228827 +0000 UTC m=+0.110358442 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc.)
Nov 29 10:21:09 np0005539860 python3.9[229553]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:09 np0005539860 systemd[1]: Started libpod-conmon-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope.
Nov 29 10:21:09 np0005539860 podman[229554]: 2025-11-29 15:21:09.257043029 +0000 UTC m=+0.113006723 container exec e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9)
Nov 29 10:21:09 np0005539860 podman[229554]: 2025-11-29 15:21:09.290481856 +0000 UTC m=+0.146445540 container exec_died e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc.)
Nov 29 10:21:09 np0005539860 systemd[1]: libpod-conmon-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope: Deactivated successfully.
Nov 29 10:21:10 np0005539860 python3.9[229735]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:10 np0005539860 systemd[1]: Started libpod-conmon-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope.
Nov 29 10:21:10 np0005539860 podman[229736]: 2025-11-29 15:21:10.252265991 +0000 UTC m=+0.079024502 container exec e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, config_id=edpm)
Nov 29 10:21:10 np0005539860 podman[229736]: 2025-11-29 15:21:10.28804065 +0000 UTC m=+0.114799171 container exec_died e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.openshift.expose-services=)
Nov 29 10:21:10 np0005539860 systemd[1]: libpod-conmon-e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa.scope: Deactivated successfully.
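The paired podman_container_exec calls (id -u at 10:21:09, id -g at 10:21:10) discover which account the exporter runs as, so the healthcheck directory created afterwards can be chowned to match. A sketch of the same probe:

    import subprocess

    def container_ids(name: str) -> tuple[int, int]:
        def probe(flag: str) -> int:
            out = subprocess.run(["podman", "exec", name, "id", flag],
                                 capture_output=True, text=True, check=True).stdout
            return int(out.strip())
        return probe("-u"), probe("-g")

    uid, gid = container_ids("openstack_network_exporter")
    print(uid, gid)  # feeds owner/group of the healthcheck directory below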
Nov 29 10:21:10 np0005539860 podman[229790]: 2025-11-29 15:21:10.685992898 +0000 UTC m=+0.131016366 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 10:21:11 np0005539860 python3.9[229937]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:12 np0005539860 python3.9[230089]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Nov 29 10:21:13 np0005539860 podman[230224]: 2025-11-29 15:21:13.308015203 +0000 UTC m=+0.089577937 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
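node_exporter's systemd collector only scrapes units matching the include pattern in the logged command line; a quick check of what that regex selects (the unit names here are illustrative, not taken from this host):

    import re

    # Regex copied from the logged --collector.systemd.unit-include flag.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("edpm_nova_compute.service", "ovs-vswitchd.service", "sshd.service"):
        print(unit, bool(unit_include.fullmatch(unit)))  # sshd.service -> False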
Nov 29 10:21:13 np0005539860 python3.9[230267]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:13 np0005539860 systemd[1]: Started libpod-conmon-6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.scope.
Nov 29 10:21:13 np0005539860 podman[230275]: 2025-11-29 15:21:13.650973916 +0000 UTC m=+0.127090270 container exec 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 10:21:13 np0005539860 podman[230275]: 2025-11-29 15:21:13.684714208 +0000 UTC m=+0.160830522 container exec_died 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 10:21:13 np0005539860 systemd[1]: libpod-conmon-6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.scope: Deactivated successfully.
Nov 29 10:21:14 np0005539860 python3.9[230458]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:14 np0005539860 systemd[1]: Started libpod-conmon-6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.scope.
Nov 29 10:21:14 np0005539860 podman[230459]: 2025-11-29 15:21:14.962805833 +0000 UTC m=+0.131050005 container exec 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 10:21:14 np0005539860 podman[230459]: 2025-11-29 15:21:14.995520429 +0000 UTC m=+0.163764571 container exec_died 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Nov 29 10:21:15 np0005539860 systemd[1]: libpod-conmon-6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf.scope: Deactivated successfully.
Nov 29 10:21:16 np0005539860 python3.9[230639]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:17 np0005539860 python3.9[230791]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Nov 29 10:21:18 np0005539860 python3.9[230956]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:18 np0005539860 systemd[1]: Started libpod-conmon-327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da.scope.
Nov 29 10:21:18 np0005539860 podman[230957]: 2025-11-29 15:21:18.508532952 +0000 UTC m=+0.124570283 container exec 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9)
Nov 29 10:21:18 np0005539860 podman[230957]: 2025-11-29 15:21:18.542518621 +0000 UTC m=+0.158555962 container exec_died 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, release-0.7.12=)
Nov 29 10:21:18 np0005539860 systemd[1]: libpod-conmon-327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da.scope: Deactivated successfully.
Nov 29 10:21:19 np0005539860 python3.9[231138]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Nov 29 10:21:19 np0005539860 systemd[1]: Started libpod-conmon-327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da.scope.
Nov 29 10:21:19 np0005539860 podman[231139]: 2025-11-29 15:21:19.844375911 +0000 UTC m=+0.132985368 container exec 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, version=9.4, distribution-scope=public, vendor=Red Hat, Inc., container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0)
Nov 29 10:21:19 np0005539860 podman[231139]: 2025-11-29 15:21:19.879055968 +0000 UTC m=+0.167665405 container exec_died 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, config_id=edpm, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-type=git, version=9.4, io.buildah.version=1.29.0)
Nov 29 10:21:19 np0005539860 systemd[1]: libpod-conmon-327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da.scope: Deactivated successfully.
Nov 29 10:21:21 np0005539860 python3.9[231322]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:22 np0005539860 python3.9[231474]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:23 np0005539860 python3.9[231626]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:24 np0005539860 python3.9[231749]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1764429682.7577114-844-97446524502802/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
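The stat/copy pair above is how ansible stays idempotent: it hashes the destination and only rewrites it when the rendered template's SHA-1 differs. The same digest, computed with the stdlib:

    import hashlib

    def sha1_of(path: str) -> str:
        h = hashlib.sha1()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    # Matches the logged value while the rendered template is unchanged:
    # 40b8960d32c81de936cddbeb137a8240ecc54e7b
    print(sha1_of("/var/lib/edpm-config/firewall/kepler.yaml"))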
Nov 29 10:21:24 np0005539860 podman[231774]: 2025-11-29 15:21:24.665315227 +0000 UTC m=+0.111395561 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 10:21:25 np0005539860 python3.9[231925]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:26 np0005539860 python3.9[232077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:27 np0005539860 python3.9[232155]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:28 np0005539860 python3.9[232307]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:28 np0005539860 python3.9[232385]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.1sob_d3r recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:29 np0005539860 podman[203677]: time="2025-11-29T15:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 10:21:29 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Nov 29 10:21:29 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4259 "" "Go-http-client/1.1"
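These two GET lines are podman_exporter scraping podman's libpod REST API over the unix socket it has mounted (CONTAINER_HOST in the podman_exporter config at 10:21:24). The same endpoint can be hit directly; a sketch, not a robust HTTP client (HTTP/1.0 keeps the response unchunked so the body can be split off naively):

    import json
    import socket

    SOCK = "/run/podman/podman.sock"   # socket path from the exporter's volume mount
    REQUEST = (b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
               b"Host: d\r\n\r\n")

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(SOCK)
        s.sendall(REQUEST)
        raw = b"".join(iter(lambda: s.recv(65536), b""))

    body = raw.split(b"\r\n\r\n", 1)[1]
    print(len(json.loads(body)), "containers")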
Nov 29 10:21:29 np0005539860 python3.9[232537]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:30 np0005539860 python3.9[232615]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 10:21:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:21:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:21:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 10:21:31 np0005539860 openstack_network_exporter[205841]: ERROR   15:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
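These errors mean the exporter found no *.ctl unix control sockets for ovsdb-server or ovn-northd inside its mounts, so its appctl-backed collectors are skipped (ovn-northd would not normally run on a compute node anyway). A quick check for the sockets, assuming the conventional run-dir paths rather than this deployment's exact bind mounts:

    import glob

    patterns = (
        "/var/run/openvswitch/ovsdb-server.*.ctl",
        "/var/run/ovn/ovn-northd.*.ctl",
    )
    for pattern in patterns:
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "missing, matching the errors above")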
Nov 29 10:21:31 np0005539860 python3.9[232767]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
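The role reads the live ruleset as JSON so it can be compared against the desired rules before anything is flushed or reloaded. The shape of that output, sketched (nft must run as root; the top-level key is "nftables", a list of metainfo/table/chain/rule objects):

    import json
    import subprocess

    ruleset = json.loads(
        subprocess.run(["nft", "-j", "list", "ruleset"],
                       capture_output=True, text=True, check=True).stdout
    )
    chains = [e["chain"]["name"] for e in ruleset.get("nftables", []) if "chain" in e]
    print(len(chains), "chains:", chains[:5])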
Nov 29 10:21:32 np0005539860 python3[232920]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
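edpm_nftables_from_files aggregates rule definitions from that directory; presumably each *.yaml there (like the kepler.yaml written at 10:21:24) holds a list of rule dicts that the role concatenates before rendering the .nft files managed below. A sketch of that aggregation, under that assumption:

    import glob

    import yaml  # PyYAML, as used by ansible itself

    rules = []
    for path in sorted(glob.glob("/var/lib/edpm-config/firewall/*.yaml")):
        with open(path) as f:
            rules.extend(yaml.safe_load(f) or [])
    print(len(rules), "firewall rule entries collected")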
Nov 29 10:21:33 np0005539860 podman[233018]: 2025-11-29 15:21:33.65637519 +0000 UTC m=+0.090022449 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 29 10:21:34 np0005539860 python3.9[233090]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:34 np0005539860 python3.9[233168]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:35 np0005539860 python3.9[233320]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:36 np0005539860 podman[233370]: 2025-11-29 15:21:36.204500744 +0000 UTC m=+0.116876576 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 10:21:36 np0005539860 python3.9[233417]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:37 np0005539860 podman[233541]: 2025-11-29 15:21:37.321501631 +0000 UTC m=+0.146744226 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 10:21:37 np0005539860 podman[233542]: 2025-11-29 15:21:37.361485151 +0000 UTC m=+0.182892553 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller)
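Note: the health_status=healthy events above come from podman's native healthcheck machinery: each container's config_data carries a 'healthcheck' entry whose 'test' command is bind-mounted into the container from /var/lib/openstack/healthchecks/<container_name>. As a minimal sketch (assuming podman is on PATH and using the logged container_name), the same status can be read back with podman inspect:

    import json
    import subprocess

    def health_status(container: str) -> str:
        """Read the healthcheck status podman recorded for a container.

        Sketch only: shells out to `podman inspect` and reads State.Health
        (State.Healthcheck on older podman releases) from its JSON output.
        """
        out = subprocess.run(["podman", "inspect", container],
                             check=True, capture_output=True, text=True).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("ceilometer_agent_ipmi"))  # e.g. "healthy"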
Nov 29 10:21:37 np0005539860 python3.9[233603]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:38 np0005539860 podman[233660]: 2025-11-29 15:21:38.024230117 +0000 UTC m=+0.134892358 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, io.openshift.expose-services=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., architecture=x86_64, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, config_id=edpm, container_name=kepler)
Nov 29 10:21:38 np0005539860 python3.9[233707]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:39 np0005539860 podman[233831]: 2025-11-29 15:21:39.117170491 +0000 UTC m=+0.136153804 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, version=9.6, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41)
Nov 29 10:21:39 np0005539860 python3.9[233880]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:39 np0005539860 python3.9[233958]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:40 np0005539860 podman[234082]: 2025-11-29 15:21:40.948541724 +0000 UTC m=+0.137727905 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true)
Nov 29 10:21:41 np0005539860 python3.9[234130]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:21:41 np0005539860 python3.9[234255]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1764429700.2857487-969-126113420461914/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:43 np0005539860 python3.9[234407]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:21:43 np0005539860 podman[234461]: 2025-11-29 15:21:43.671625627 +0000 UTC m=+0.113989189 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:21:44 np0005539860 python3.9[234581]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
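Note: the command above is the validation half of the nftables update: the five generated fragments are concatenated and fed to `nft -c -f -`, which parses and checks the ruleset without installing anything. A minimal sketch of the same check (file list copied from the logged command):

    import subprocess

    FILES = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    def check_ruleset(files=FILES) -> bool:
        """Dry-run the concatenated fragments through nft (-c = check only)."""
        blob = b"".join(open(f, "rb").read() for f in files)
        return subprocess.run(["nft", "-c", "-f", "-"], input=blob).returncode == 0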
Nov 29 10:21:45 np0005539860 python3.9[234736]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
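Note: in the blockinfile record above, #012 is journald's escaping of embedded newlines, so the managed block written into /etc/sysconfig/nftables.conf (and validated via `nft -c -f %s`) decodes to:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK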
Nov 29 10:21:46 np0005539860 python3.9[234888]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:21:47 np0005539860 python3.9[235041]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Nov 29 10:21:48 np0005539860 nova_compute[189485]: 2025-11-29 15:21:48.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:48 np0005539860 nova_compute[189485]: 2025-11-29 15:21:48.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 10:21:48 np0005539860 nova_compute[189485]: 2025-11-29 15:21:48.516 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 10:21:48 np0005539860 nova_compute[189485]: 2025-11-29 15:21:48.517 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:48 np0005539860 nova_compute[189485]: 2025-11-29 15:21:48.517 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 10:21:48 np0005539860 nova_compute[189485]: 2025-11-29 15:21:48.538 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:48 np0005539860 python3.9[235195]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 10:21:49 np0005539860 nova_compute[189485]: 2025-11-29 15:21:49.549 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:49 np0005539860 nova_compute[189485]: 2025-11-29 15:21:49.549 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 10:21:49 np0005539860 nova_compute[189485]: 2025-11-29 15:21:49.549 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 10:21:49 np0005539860 nova_compute[189485]: 2025-11-29 15:21:49.605 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 10:21:50 np0005539860 python3.9[235351]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
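Note: taken together, these tasks trace the marker-file pattern edpm_ansible uses so the ruleset is reloaded only when it actually changed: copy edpm-rules.nft and touch edpm-rules.nft.changed (10:21:41-10:21:43), dry-run the full concatenation (10:21:44), load the chains file (10:21:46), stat the marker (10:21:47), apply flushes + rules + update-jumps (10:21:48), and finally delete the marker (10:21:50). A condensed sketch of that flow, using the paths and commands from the log:

    import pathlib
    import subprocess

    RULES = pathlib.Path("/etc/nftables/edpm-rules.nft")
    MARKER = RULES.parent / (RULES.name + ".changed")   # touched after a copy

    def apply_if_changed():
        """Reload the generated nftables ruleset only when the marker exists."""
        if not MARKER.exists():     # stat step: rules unchanged, nothing to do
            return
        subprocess.run(["nft", "-f", "/etc/nftables/edpm-chains.nft"], check=True)
        blob = b"".join(pathlib.Path(f).read_bytes() for f in (
            "/etc/nftables/edpm-flushes.nft",
            "/etc/nftables/edpm-rules.nft",
            "/etc/nftables/edpm-update-jumps.nft"))
        subprocess.run(["nft", "-f", "-"], input=blob, check=True)
        MARKER.unlink()             # consume the marker until the next change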
Nov 29 10:21:50 np0005539860 nova_compute[189485]: 2025-11-29 15:21:50.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:50 np0005539860 nova_compute[189485]: 2025-11-29 15:21:50.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:50 np0005539860 systemd[1]: session-26.scope: Deactivated successfully.
Nov 29 10:21:50 np0005539860 systemd[1]: session-26.scope: Consumed 1min 44.281s CPU time.
Nov 29 10:21:50 np0005539860 systemd-logind[794]: Session 26 logged out. Waiting for processes to exit.
Nov 29 10:21:50 np0005539860 systemd-logind[794]: Removed session 26.
Nov 29 10:21:51 np0005539860 nova_compute[189485]: 2025-11-29 15:21:51.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:52 np0005539860 nova_compute[189485]: 2025-11-29 15:21:52.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:52 np0005539860 nova_compute[189485]: 2025-11-29 15:21:52.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:52 np0005539860 nova_compute[189485]: 2025-11-29 15:21:52.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:52 np0005539860 nova_compute[189485]: 2025-11-29 15:21:52.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:21:52 np0005539860 nova_compute[189485]: 2025-11-29 15:21:52.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:21:52 np0005539860 nova_compute[189485]: 2025-11-29 15:21:52.527 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:21:52 np0005539860 nova_compute[189485]: 2025-11-29 15:21:52.527 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.014 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.016 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5664MB free_disk=72.4414291381836GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.016 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.016 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.222 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.223 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.413 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.559 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.559 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.590 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.623 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.646 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.671 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.674 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 10:21:53 np0005539860 nova_compute[189485]: 2025-11-29 15:21:53.675 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.659s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
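Note: the inventory logged at 15:21:53 makes the placement arithmetic explicit: schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this node advertises 8 * 4.0 = 32 VCPU, (7679 - 512) * 1.0 = 7167 MB of RAM, and 79 * 0.9 = 71.1 GB of disk. A short sketch over the logged inventory dict:

    inventory = {  # values verbatim from the resource tracker log above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32   MEMORY_MB: 7167   DISK_GB: 71.1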
Nov 29 10:21:55 np0005539860 systemd-logind[794]: New session 27 of user zuul.
Nov 29 10:21:55 np0005539860 systemd[1]: Started Session 27 of User zuul.
Nov 29 10:21:55 np0005539860 nova_compute[189485]: 2025-11-29 15:21:55.676 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:55 np0005539860 nova_compute[189485]: 2025-11-29 15:21:55.677 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 10:21:55 np0005539860 nova_compute[189485]: 2025-11-29 15:21:55.677 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 10:21:55 np0005539860 podman[235378]: 2025-11-29 15:21:55.691019111 +0000 UTC m=+0.124676225 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 10:21:57 np0005539860 python3.9[235552]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 10:21:58 np0005539860 python3.9[235708]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Nov 29 10:21:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:21:59.145 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 10:21:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:21:59.146 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 10:21:59 np0005539860 ovn_metadata_agent[106708]: 2025-11-29 15:21:59.146 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 10:21:59 np0005539860 podman[203677]: time="2025-11-29T15:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 10:21:59 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 10:21:59 np0005539860 podman[203677]: @ - - [29/Nov/2025:15:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4265 "" "Go-http-client/1.1"
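Note: the two GET requests above are a client polling podman's libpod REST API over its unix socket; the podman_exporter config earlier in the log mounts /run/podman/podman.sock and points CONTAINER_HOST at it. A minimal sketch of the same containers/json query without any client library (socket path and API version taken from the log; HTTP/1.0 so the server closes the connection when done):

    import json
    import socket

    SOCK = "/run/podman/podman.sock"   # from the podman_exporter config_data

    def libpod_containers():
        """List containers through the libpod REST API, as the exporter does."""
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(SOCK)
        s.sendall(b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.0\r\n"
                  b"Host: localhost\r\n\r\n")
        raw = b""
        while chunk := s.recv(65536):
            raw += chunk
        s.close()
        return json.loads(raw.split(b"\r\n\r\n", 1)[1])  # drop the headers

    for c in libpod_containers():
        print(c["Names"], c.get("State"))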
Nov 29 10:22:00 np0005539860 python3.9[235861]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.046 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.047 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.047 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': [], 'disk.ephemeral.size': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 ceilometer_agent_compute[200190]: 2025-11-29 15:22:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 10:22:01 np0005539860 python3.9[235946]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Nov 29 10:22:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:22:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:22:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 10:22:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:22:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 10:22:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 10:22:01 np0005539860 openstack_network_exporter[205841]: ERROR   15:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 10:22:04 np0005539860 podman[236052]: 2025-11-29 15:22:04.714567704 +0000 UTC m=+0.153235590 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 10:22:05 np0005539860 python3.9[236124]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:22:06 np0005539860 python3.9[236247]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764429724.1267657-54-143690012842960/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:22:06 np0005539860 podman[236307]: 2025-11-29 15:22:06.695440766 +0000 UTC m=+0.139652166 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 10:22:07 np0005539860 python3.9[236420]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:22:07 np0005539860 podman[236444]: 2025-11-29 15:22:07.688763005 +0000 UTC m=+0.131998553 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent)
Nov 29 10:22:07 np0005539860 podman[236446]: 2025-11-29 15:22:07.739954443 +0000 UTC m=+0.173772659 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 10:22:08 np0005539860 podman[236586]: 2025-11-29 15:22:08.254749753 +0000 UTC m=+0.119921689 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, managed_by=edpm_ansible, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 10:22:08 np0005539860 python3.9[236633]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Nov 29 10:22:09 np0005539860 python3.9[236756]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1764429727.6997502-77-100268202582390/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Nov 29 10:22:09 np0005539860 podman[236787]: 2025-11-29 15:22:09.641079913 +0000 UTC m=+0.091990252 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter)
Nov 29 10:22:10 np0005539860 python3.9[236927]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Nov 29 15:22:10 compute-0 systemd[1]: Stopping System Logging Service...
Nov 29 15:22:10 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Nov 29 15:22:10 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Nov 29 15:22:10 compute-0 systemd[1]: Stopped System Logging Service.
Nov 29 15:22:10 compute-0 systemd[1]: rsyslog.service: Consumed 4.711s CPU time, 8.4M memory peak, read 0B from disk, written 6.6M to disk.
Nov 29 15:22:10 compute-0 systemd[1]: Starting System Logging Service...
Nov 29 15:22:10 compute-0 rsyslogd[236931]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236931" x-info="https://www.rsyslog.com"] start
Nov 29 15:22:10 compute-0 systemd[1]: Started System Logging Service.
Nov 29 15:22:10 compute-0 rsyslogd[236931]: imjournal: journal files changed, reloading... [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 15:22:10 compute-0 rsyslogd[236931]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Nov 29 15:22:10 compute-0 rsyslogd[236931]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Nov 29 15:22:10 compute-0 rsyslogd[236931]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Nov 29 15:22:10 compute-0 rsyslogd[236931]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Nov 29 15:22:11 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Nov 29 15:22:11 compute-0 systemd[1]: session-27.scope: Consumed 12.068s CPU time.
Nov 29 15:22:11 compute-0 systemd-logind[794]: Session 27 logged out. Waiting for processes to exit.
Nov 29 15:22:11 compute-0 systemd-logind[794]: Removed session 27.
Nov 29 15:22:11 compute-0 podman[236960]: 2025-11-29 15:22:11.508424368 +0000 UTC m=+0.117880603 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 15:22:14 compute-0 podman[236977]: 2025-11-29 15:22:14.658994267 +0000 UTC m=+0.110993719 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:22:26 compute-0 podman[237002]: 2025-11-29 15:22:26.63570322 +0000 UTC m=+0.082772123 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:22:29 compute-0 podman[203677]: time="2025-11-29T15:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:22:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:22:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4272 "" "Go-http-client/1.1"
Nov 29 15:22:31 compute-0 openstack_network_exporter[205841]: ERROR   15:22:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:22:31 compute-0 openstack_network_exporter[205841]: ERROR   15:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:22:31 compute-0 openstack_network_exporter[205841]: ERROR   15:22:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:22:31 compute-0 openstack_network_exporter[205841]: ERROR   15:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:22:31 compute-0 openstack_network_exporter[205841]: ERROR   15:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:22:35 compute-0 podman[237025]: 2025-11-29 15:22:35.692033237 +0000 UTC m=+0.141283018 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 29 15:22:37 compute-0 podman[237043]: 2025-11-29 15:22:37.683723368 +0000 UTC m=+0.125412124 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:22:38 compute-0 systemd-logind[794]: New session 28 of user zuul.
Nov 29 15:22:38 compute-0 systemd[1]: Started Session 28 of User zuul.
Nov 29 15:22:38 compute-0 podman[237064]: 2025-11-29 15:22:38.383584275 +0000 UTC m=+0.087701114 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent)
Nov 29 15:22:38 compute-0 podman[237067]: 2025-11-29 15:22:38.396926812 +0000 UTC m=+0.094788135 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, container_name=kepler, name=ubi9)
Nov 29 15:22:38 compute-0 podman[237066]: 2025-11-29 15:22:38.452766655 +0000 UTC m=+0.142563552 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Nov 29 15:22:39 compute-0 python3[237299]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 15:22:39 compute-0 podman[237310]: 2025-11-29 15:22:39.883779318 +0000 UTC m=+0.084155960 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc.)
Nov 29 15:22:41 compute-0 podman[237513]: 2025-11-29 15:22:41.719561711 +0000 UTC m=+0.138270967 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 15:22:41 compute-0 python3[237557]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 15:22:43 compute-0 python3[237711]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 15:22:44 compute-0 podman[237737]: 2025-11-29 15:22:44.793569913 +0000 UTC m=+0.082106375 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:22:45 compute-0 python3[237885]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Nov 29 15:22:47 compute-0 python3[238038]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Nov 29 15:22:49 compute-0 nova_compute[189485]: 2025-11-29 15:22:49.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:49 compute-0 nova_compute[189485]: 2025-11-29 15:22:49.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:22:49 compute-0 nova_compute[189485]: 2025-11-29 15:22:49.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:22:49 compute-0 nova_compute[189485]: 2025-11-29 15:22:49.515 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 15:22:49 compute-0 python3[238264]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 15:22:50 compute-0 nova_compute[189485]: 2025-11-29 15:22:50.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:51 compute-0 python3[238428]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.481 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.522 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.523 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.523 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.524 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.986 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.988 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5642MB free_disk=72.43603897094727GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.988 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:22:52 compute-0 nova_compute[189485]: 2025-11-29 15:22:52.989 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:22:53 compute-0 nova_compute[189485]: 2025-11-29 15:22:53.098 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:22:53 compute-0 nova_compute[189485]: 2025-11-29 15:22:53.098 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:22:53 compute-0 nova_compute[189485]: 2025-11-29 15:22:53.151 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:22:53 compute-0 nova_compute[189485]: 2025-11-29 15:22:53.177 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:22:53 compute-0 nova_compute[189485]: 2025-11-29 15:22:53.179 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:22:53 compute-0 nova_compute[189485]: 2025-11-29 15:22:53.179 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:22:54 compute-0 nova_compute[189485]: 2025-11-29 15:22:54.184 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:54 compute-0 nova_compute[189485]: 2025-11-29 15:22:54.481 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:54 compute-0 nova_compute[189485]: 2025-11-29 15:22:54.513 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:55 compute-0 nova_compute[189485]: 2025-11-29 15:22:55.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:56 compute-0 nova_compute[189485]: 2025-11-29 15:22:56.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:22:56 compute-0 nova_compute[189485]: 2025-11-29 15:22:56.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:22:57 compute-0 podman[238467]: 2025-11-29 15:22:57.663911385 +0000 UTC m=+0.094093007 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:22:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:22:59.147 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:22:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:22:59.148 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:22:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:22:59.148 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:22:59 compute-0 podman[203677]: time="2025-11-29T15:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:22:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:22:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4279 "" "Go-http-client/1.1"
Nov 29 15:23:01 compute-0 openstack_network_exporter[205841]: ERROR   15:23:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:23:01 compute-0 openstack_network_exporter[205841]: ERROR   15:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:23:01 compute-0 openstack_network_exporter[205841]: ERROR   15:23:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:23:01 compute-0 openstack_network_exporter[205841]: ERROR   15:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:23:01 compute-0 openstack_network_exporter[205841]: ERROR   15:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:23:06 compute-0 podman[238488]: 2025-11-29 15:23:06.692799807 +0000 UTC m=+0.134193568 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 15:23:08 compute-0 podman[238510]: 2025-11-29 15:23:08.671427519 +0000 UTC m=+0.113824923 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_id=edpm, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, version=9.4, io.openshift.expose-services=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 29 15:23:08 compute-0 podman[238512]: 2025-11-29 15:23:08.677491592 +0000 UTC m=+0.121875299 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 29 15:23:08 compute-0 podman[238511]: 2025-11-29 15:23:08.683502882 +0000 UTC m=+0.119635098 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 15:23:08 compute-0 podman[238513]: 2025-11-29 15:23:08.70809288 +0000 UTC m=+0.138351560 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:23:10 compute-0 podman[238589]: 2025-11-29 15:23:10.622702219 +0000 UTC m=+0.074720988 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64)
Nov 29 15:23:12 compute-0 podman[238610]: 2025-11-29 15:23:12.678057441 +0000 UTC m=+0.122585058 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 15:23:15 compute-0 podman[238628]: 2025-11-29 15:23:15.694430123 +0000 UTC m=+0.135536894 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:23:28 compute-0 podman[238652]: 2025-11-29 15:23:28.672640469 +0000 UTC m=+0.111429011 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:23:29 compute-0 podman[203677]: time="2025-11-29T15:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:23:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:23:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4274 "" "Go-http-client/1.1"
Nov 29 15:23:31 compute-0 openstack_network_exporter[205841]: ERROR   15:23:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:23:31 compute-0 openstack_network_exporter[205841]: ERROR   15:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:23:31 compute-0 openstack_network_exporter[205841]: ERROR   15:23:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:23:31 compute-0 openstack_network_exporter[205841]: ERROR   15:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:23:31 compute-0 openstack_network_exporter[205841]: ERROR   15:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:23:37 compute-0 podman[238675]: 2025-11-29 15:23:37.644630851 +0000 UTC m=+0.085754556 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Nov 29 15:23:39 compute-0 podman[238694]: 2025-11-29 15:23:39.640059811 +0000 UTC m=+0.085705405 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, vcs-type=git, container_name=kepler, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, architecture=x86_64)
Nov 29 15:23:39 compute-0 podman[238695]: 2025-11-29 15:23:39.655234228 +0000 UTC m=+0.102642990 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:23:39 compute-0 podman[238696]: 2025-11-29 15:23:39.666969041 +0000 UTC m=+0.105392117 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:23:39 compute-0 podman[238697]: 2025-11-29 15:23:39.696185734 +0000 UTC m=+0.135144414 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:23:41 compute-0 podman[238768]: 2025-11-29 15:23:41.658001481 +0000 UTC m=+0.089581982 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, architecture=x86_64, io.openshift.expose-services=, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7)
Nov 29 15:23:43 compute-0 podman[238787]: 2025-11-29 15:23:43.660369061 +0000 UTC m=+0.096522812 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:23:46 compute-0 podman[238807]: 2025-11-29 15:23:46.635463111 +0000 UTC m=+0.077945842 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
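The eight health_status events above are podman's periodic healthcheck results for the EDPM-managed containers: each 'healthcheck' stanza in config_data mounts /var/lib/openstack/healthchecks/<name> into the container and runs the /openstack/healthcheck script, and a passing run is recorded as health_status=healthy with health_failing_streak=0. A minimal sketch of spot-checking one container's current health state from Python; the container name is taken from the events above, and the State.Health field layout is an assumption based on podman 4.x (older releases expose it as State.Healthcheck):

    import json
    import subprocess

    # Inspect one of the containers whose health events appear above.
    raw = subprocess.run(
        ["podman", "inspect", "ceilometer_agent_compute"],
        capture_output=True, text=True, check=True,
    ).stdout
    state = json.loads(raw)[0]["State"]
    # Field name varies across podman releases (assumption: 4.x layout).
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))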
Nov 29 15:23:49 compute-0 nova_compute[189485]: 2025-11-29 15:23:49.491 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:49 compute-0 nova_compute[189485]: 2025-11-29 15:23:49.492 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:23:49 compute-0 nova_compute[189485]: 2025-11-29 15:23:49.492 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:23:49 compute-0 nova_compute[189485]: 2025-11-29 15:23:49.519 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 15:23:50 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Nov 29 15:23:50 compute-0 systemd[1]: session-28.scope: Consumed 10.792s CPU time.
Nov 29 15:23:50 compute-0 systemd-logind[794]: Session 28 logged out. Waiting for processes to exit.
Nov 29 15:23:50 compute-0 systemd-logind[794]: Removed session 28.
Nov 29 15:23:51 compute-0 nova_compute[189485]: 2025-11-29 15:23:51.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:53 compute-0 nova_compute[189485]: 2025-11-29 15:23:53.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:53 compute-0 nova_compute[189485]: 2025-11-29 15:23:53.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.526 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.925 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.927 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5713MB free_disk=72.43629837036133GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.927 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.928 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.995 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:23:54 compute-0 nova_compute[189485]: 2025-11-29 15:23:54.996 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:23:55 compute-0 nova_compute[189485]: 2025-11-29 15:23:55.025 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:23:55 compute-0 nova_compute[189485]: 2025-11-29 15:23:55.041 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:23:55 compute-0 nova_compute[189485]: 2025-11-29 15:23:55.044 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:23:55 compute-0 nova_compute[189485]: 2025-11-29 15:23:55.045 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
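The inventory reported at 15:23:55 is what placement schedules against: for each resource class the usable capacity works out to (total - reserved) * allocation_ratio. A quick check against the logged values (a sketch of the arithmetic only, not placement's code):

    # Inventory copied from the nova.scheduler.client.report line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = int((inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
        print(rc, capacity)  # VCPU 32, MEMORY_MB 7167, DISK_GB 71

So this otherwise idle 8-vCPU / 7679 MB host can accept up to 32 vCPUs of instances at the configured 4.0 CPU overcommit.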
Nov 29 15:23:58 compute-0 nova_compute[189485]: 2025-11-29 15:23:58.047 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:58 compute-0 nova_compute[189485]: 2025-11-29 15:23:58.047 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:23:58 compute-0 nova_compute[189485]: 2025-11-29 15:23:58.048 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
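The recurring "Running periodic task ComputeManager._*" lines come from oslo.service's periodic task machinery: methods decorated with periodic_task are collected on the manager class and invoked by run_periodic_tasks(), which logs each dispatch at DEBUG. A minimal sketch, assuming an illustrative task name and spacing (not nova's configuration):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class ManagerSketch(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # Hypothetical task; nova's ComputeManager defines its own set.
        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _poll_something(self, context):
            pass

    # With DEBUG logging enabled this emits the same
    # "Running periodic task ..." line format seen above.
    ManagerSketch().run_periodic_tasks(context=None)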
Nov 29 15:23:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:23:59.150 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:23:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:23:59.150 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:23:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:23:59.151 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
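The Acquiring/acquired/released triplets here (and around "compute_resources" above) are oslo.concurrency's lock instrumentation: the "inner" wrapper around a named in-process semaphore logs how long the caller waited for the lock and how long it held it. A minimal sketch of the pattern with an illustrative function (not the agents' actual code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named lock held; with DEBUG logging enabled,
        # lockutils emits the Acquiring/acquired/released lines with
        # the waited/held timings seen in the journal.
        pass

    check_child_processes()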
Nov 29 15:23:59 compute-0 podman[238832]: 2025-11-29 15:23:59.66988504 +0000 UTC m=+0.108005687 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:23:59 compute-0 podman[203677]: time="2025-11-29T15:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:23:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:23:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4283 "" "Go-http-client/1.1"
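These two GET lines are podman_exporter scraping the libpod REST API through the socket it mounts at /run/podman/podman.sock (see its config_data above). The same endpoint can be queried directly over the unix socket; a stdlib-only sketch, with the /v4.9.3 version prefix taken from the log:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, enough to talk to the libpod API."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    # Requires read access to the podman socket (root on this host).
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))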
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.047 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.048 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.051 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.055 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.068 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.069 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.070 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:24:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:24:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
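[Editor's note] The alternating "Executing discovery" / "Skip pollster" / "Finished processing" lines above trace a single polling cycle: each pollster runs its discovery method, and is skipped when discovery returns nothing. A minimal sketch of that control flow; the names (Pollster, discover, get_samples) are illustrative assumptions, not ceilometer's real API:

```python
# Minimal sketch of the polling cycle traced in the DEBUG lines above.
# Names here are illustrative assumptions, not ceilometer's implementation.
def run_polling_cycle(pollsters, discover):
    for p in pollsters:
        resources = discover("local_instances")   # per-pollster discovery
        if not resources:
            # corresponds to "Skip pollster <name>, no resources found this cycle"
            print(f"Skip pollster {p.name}, no resources found this cycle")
            continue
        p.get_samples(resources)                  # hypothetical sample collection
        print(f"Finished processing pollster [{p.name}].")
```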
Nov 29 15:24:01 compute-0 openstack_network_exporter[205841]: ERROR   15:24:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:24:01 compute-0 openstack_network_exporter[205841]: ERROR   15:24:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:24:01 compute-0 openstack_network_exporter[205841]: ERROR   15:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:24:01 compute-0 openstack_network_exporter[205841]: ERROR   15:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
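[Editor's note] The exporter errors above all come down to missing OVS/OVN unix control sockets. One way to confirm, assuming the standard rundirs mounted in the container configs logged below (/run/openvswitch and /run/ovn), is to look for the *.ctl files that appctl-style calls resolve:

```python
# Diagnostic sketch for the "no control socket files found" errors above.
# The glob patterns are assumptions based on the /run/openvswitch and
# /run/ovn volume mounts shown in the container config_data below.
import glob

for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    found = glob.glob(pattern)
    print(pattern, "->", found if found else "none (matches the ERROR lines above)")
```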
Nov 29 15:24:08 compute-0 podman[238857]: 2025-11-29 15:24:08.647230981 +0000 UTC m=+0.099942357 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:24:10 compute-0 podman[238877]: 2025-11-29 15:24:10.672321456 +0000 UTC m=+0.111714069 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 15:24:10 compute-0 podman[238876]: 2025-11-29 15:24:10.676405108 +0000 UTC m=+0.111151063 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc.)
Nov 29 15:24:10 compute-0 podman[238878]: 2025-11-29 15:24:10.694387413 +0000 UTC m=+0.116929564 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:24:10 compute-0 podman[238884]: 2025-11-29 15:24:10.713761665 +0000 UTC m=+0.126848496 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 15:24:12 compute-0 podman[238953]: 2025-11-29 15:24:12.673604457 +0000 UTC m=+0.120446519 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, distribution-scope=public, name=ubi9-minimal, architecture=x86_64, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 15:24:14 compute-0 podman[238973]: 2025-11-29 15:24:14.66070274 +0000 UTC m=+0.107623197 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 15:24:17 compute-0 podman[238993]: 2025-11-29 15:24:17.669505105 +0000 UTC m=+0.116149781 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:24:29 compute-0 podman[203677]: time="2025-11-29T15:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:24:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:24:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4279 "" "Go-http-client/1.1"
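[Editor's note] These two access-log lines show a client (the podman exporter configured just below) scraping the libpod REST API over the podman socket. A stdlib-only sketch that reproduces the first call, assuming the socket path from the podman_exporter config (CONTAINER_HOST=unix:///run/podman/podman.sock):

```python
# Hedged sketch: issue the libpod API call from the access-log line above
# ("GET /v4.9.3/libpod/containers/json?all=true...") over the unix socket
# named in the podman_exporter CONTAINER_HOST setting.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")   # host header only; we dial the socket
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```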
Nov 29 15:24:30 compute-0 podman[239018]: 2025-11-29 15:24:30.666440905 +0000 UTC m=+0.107243046 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:24:31 compute-0 openstack_network_exporter[205841]: ERROR   15:24:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:24:31 compute-0 openstack_network_exporter[205841]: ERROR   15:24:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:24:31 compute-0 openstack_network_exporter[205841]: ERROR   15:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:24:31 compute-0 openstack_network_exporter[205841]: ERROR   15:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:24:39 compute-0 podman[239042]: 2025-11-29 15:24:39.65014737 +0000 UTC m=+0.093024756 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:24:41 compute-0 podman[239060]: 2025-11-29 15:24:41.66885676 +0000 UTC m=+0.121166949 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, container_name=kepler, io.buildah.version=1.29.0, version=9.4, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release-0.7.12=, name=ubi9, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:24:41 compute-0 podman[239061]: 2025-11-29 15:24:41.670833775 +0000 UTC m=+0.108337518 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 15:24:41 compute-0 podman[239062]: 2025-11-29 15:24:41.710546145 +0000 UTC m=+0.143791141 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 15:24:41 compute-0 podman[239063]: 2025-11-29 15:24:41.727201103 +0000 UTC m=+0.158030232 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 15:24:43 compute-0 podman[239140]: 2025-11-29 15:24:43.655754956 +0000 UTC m=+0.102157247 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git)
Nov 29 15:24:44 compute-0 podman[239159]: 2025-11-29 15:24:44.801402124 +0000 UTC m=+0.087625277 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 15:24:48 compute-0 podman[239177]: 2025-11-29 15:24:48.660041062 +0000 UTC m=+0.099420472 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
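[Editor's note] Every health_status line in this stretch reports healthy with a zero failing streak. If you need to watch these from the journal rather than the podman API, the name and health_status fields can be pulled with a simple pattern; the regex is an assumption keyed to this exact line format:

```python
# Illustrative extraction of container name and health status from the
# podman health_status journal lines above. The "name=..., health_status=..."
# field order is an assumption based on these specific lines.
import re

line = ("... container health_status e8d66bb... (image=quay.io/prometheus/"
        "node-exporter:v1.5.0, name=node_exporter, health_status=healthy, ...)")

m = re.search(r"\bname=([^,)]+).*?\bhealth_status=(\w+)", line)
if m:
    print(m.group(1), "->", m.group(2))   # node_exporter -> healthy
```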
Nov 29 15:24:50 compute-0 nova_compute[189485]: 2025-11-29 15:24:50.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:50 compute-0 nova_compute[189485]: 2025-11-29 15:24:50.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:24:50 compute-0 nova_compute[189485]: 2025-11-29 15:24:50.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:24:50 compute-0 nova_compute[189485]: 2025-11-29 15:24:50.511 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 15:24:53 compute-0 nova_compute[189485]: 2025-11-29 15:24:53.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.515 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.516 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.516 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.517 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.861 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.862 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5726MB free_disk=72.43631744384766GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.862 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.863 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.977 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:24:54 compute-0 nova_compute[189485]: 2025-11-29 15:24:54.977 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:24:55 compute-0 nova_compute[189485]: 2025-11-29 15:24:55.007 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:24:55 compute-0 nova_compute[189485]: 2025-11-29 15:24:55.020 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
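Placement treats each inventory record as schedulable capacity of (total - reserved) * allocation_ratio. Plugging in the exact inventory from the line above (plain arithmetic, no OpenStack imports needed):

    # Inventory as reported above for this provider.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f'{rc}: schedulable capacity = {capacity}')

    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 71.1; DISK_GB capacity is
    # below the physical total because its allocation ratio is 0.9.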
Nov 29 15:24:55 compute-0 nova_compute[189485]: 2025-11-29 15:24:55.022 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:24:55 compute-0 nova_compute[189485]: 2025-11-29 15:24:55.023 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:24:56 compute-0 nova_compute[189485]: 2025-11-29 15:24:56.024 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:56 compute-0 nova_compute[189485]: 2025-11-29 15:24:56.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:57 compute-0 nova_compute[189485]: 2025-11-29 15:24:57.478 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:57 compute-0 nova_compute[189485]: 2025-11-29 15:24:57.496 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:57 compute-0 nova_compute[189485]: 2025-11-29 15:24:57.497 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
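The skip message is a pure config gate: queued deletes are only reclaimed when the operator sets a positive reclaim_instance_interval, i.e. when deleted instances are soft-deleted first. A hedged reconstruction of that guard, not nova's verbatim source:

    def reclaim_queued_deletes(reclaim_instance_interval: int) -> None:
        # Mirrors the check logged above: a non-positive interval means
        # instances are deleted immediately, so there is nothing to reclaim.
        if reclaim_instance_interval <= 0:
            print('CONF.reclaim_instance_interval <= 0, skipping...')
            return
        # Otherwise nova would look up SOFT_DELETED instances older than
        # the interval and purge them.

    reclaim_queued_deletes(0)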
Nov 29 15:24:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:24:59.152 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:24:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:24:59.152 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:24:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:24:59.153 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:24:59 compute-0 nova_compute[189485]: 2025-11-29 15:24:59.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:24:59 compute-0 podman[203677]: time="2025-11-29T15:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:24:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:24:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4276 "" "Go-http-client/1.1"
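Those GET lines are the libpod REST API being queried over podman's unix socket (the socket path appears in the podman_exporter config later in this log: /run/podman/podman.sock). A self-contained way to issue the same containers/json call from the standard library, sketched with a small UnixHTTPConnection helper:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix-domain socket, enough for the libpod API."""
        def __init__(self, sock_path):
            super().__init__('localhost')
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(f'{len(containers)} containers')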
Nov 29 15:25:01 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:01.395 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:25:01 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:01.398 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 15:25:01 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:01.400 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:25:01 compute-0 openstack_network_exporter[205841]: ERROR   15:25:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:25:01 compute-0 openstack_network_exporter[205841]: ERROR   15:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:25:01 compute-0 openstack_network_exporter[205841]: ERROR   15:25:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:25:01 compute-0 openstack_network_exporter[205841]: ERROR   15:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:25:01 compute-0 openstack_network_exporter[205841]: ERROR   15:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
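These exporter errors mean no *.ctl control sockets were found where it looked; on a compute node that is expected for ovn-northd (which runs on control-plane nodes), and the ovsdb-server sockets may simply live in a different run directory. A quick existence check, sketched with conventional socket locations (assumptions about layout, not values read from the exporter's config; this host maps OVN's run dir to /var/lib/openvswitch/ovn per the ovn_controller entries below):

    import glob

    # Conventional control-socket locations; adjust for your deployment.
    patterns = [
        '/var/run/openvswitch/*.ctl',
        '/var/run/ovn/*.ctl',
        '/var/lib/openvswitch/ovn/*.ctl',
    ]
    for pattern in patterns:
        print(pattern, '->', glob.glob(pattern) or 'no sockets')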
Nov 29 15:25:01 compute-0 podman[239202]: 2025-11-29 15:25:01.646595917 +0000 UTC m=+0.089149050 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:25:10 compute-0 podman[239224]: 2025-11-29 15:25:10.706300879 +0000 UTC m=+0.137770865 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 15:25:12 compute-0 podman[239243]: 2025-11-29 15:25:12.668066835 +0000 UTC m=+0.102263400 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Nov 29 15:25:12 compute-0 podman[239245]: 2025-11-29 15:25:12.669784992 +0000 UTC m=+0.093501909 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:25:12 compute-0 podman[239244]: 2025-11-29 15:25:12.684851306 +0000 UTC m=+0.124218524 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 15:25:12 compute-0 podman[239246]: 2025-11-29 15:25:12.727027404 +0000 UTC m=+0.143131682 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:25:14 compute-0 podman[239318]: 2025-11-29 15:25:14.653759288 +0000 UTC m=+0.101015476 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Nov 29 15:25:15 compute-0 podman[239337]: 2025-11-29 15:25:15.679111613 +0000 UTC m=+0.118975319 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 15:25:19 compute-0 podman[239356]: 2025-11-29 15:25:19.671931556 +0000 UTC m=+0.107670327 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
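node_exporter above serves on port 9100 behind a --web.config.file, so the mounted node_exporter.yaml may enforce TLS or basic auth; a plain-HTTP scrape (standard library only) succeeds only if that web config leaves TLS off:

    import urllib.request

    # Probe the scrape endpoint from the health check above. If the web
    # config enables TLS, switch to https:// and trust the CA material
    # mounted under /etc/node_exporter/tls instead.
    with urllib.request.urlopen('http://localhost:9100/metrics', timeout=5) as resp:
        for line in resp.read().decode().splitlines()[:5]:
            print(line)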
Nov 29 15:25:29 compute-0 podman[203677]: time="2025-11-29T15:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:25:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:25:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4282 "" "Go-http-client/1.1"
Nov 29 15:25:31 compute-0 openstack_network_exporter[205841]: ERROR   15:25:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:25:31 compute-0 openstack_network_exporter[205841]: ERROR   15:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:25:31 compute-0 openstack_network_exporter[205841]: ERROR   15:25:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:25:31 compute-0 openstack_network_exporter[205841]: ERROR   15:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:25:31 compute-0 openstack_network_exporter[205841]: ERROR   15:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:25:32 compute-0 podman[239379]: 2025-11-29 15:25:32.632812892 +0000 UTC m=+0.078908726 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:25:41 compute-0 podman[239400]: 2025-11-29 15:25:41.648177766 +0000 UTC m=+0.101882963 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Nov 29 15:25:43 compute-0 podman[239420]: 2025-11-29 15:25:43.631263451 +0000 UTC m=+0.076004009 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:25:43 compute-0 podman[239419]: 2025-11-29 15:25:43.649828029 +0000 UTC m=+0.101783470 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., version=9.4, io.openshift.expose-services=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, io.buildah.version=1.29.0, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Nov 29 15:25:43 compute-0 podman[239421]: 2025-11-29 15:25:43.683586594 +0000 UTC m=+0.126209165 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 15:25:43 compute-0 podman[239422]: 2025-11-29 15:25:43.693363286 +0000 UTC m=+0.135688109 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 15:25:44 compute-0 podman[239498]: 2025-11-29 15:25:44.835601315 +0000 UTC m=+0.108448739 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.expose-services=, release=1755695350, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:25:46 compute-0 podman[239517]: 2025-11-29 15:25:46.67585841 +0000 UTC m=+0.116381612 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.schema-version=1.0)
Nov 29 15:25:50 compute-0 podman[239536]: 2025-11-29 15:25:50.668403477 +0000 UTC m=+0.116488064 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:25:52 compute-0 nova_compute[189485]: 2025-11-29 15:25:52.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:52 compute-0 nova_compute[189485]: 2025-11-29 15:25:52.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:25:52 compute-0 nova_compute[189485]: 2025-11-29 15:25:52.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:25:52 compute-0 nova_compute[189485]: 2025-11-29 15:25:52.510 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 15:25:55 compute-0 nova_compute[189485]: 2025-11-29 15:25:55.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:56 compute-0 nova_compute[189485]: 2025-11-29 15:25:56.478 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:56 compute-0 nova_compute[189485]: 2025-11-29 15:25:56.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:56 compute-0 nova_compute[189485]: 2025-11-29 15:25:56.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:56 compute-0 nova_compute[189485]: 2025-11-29 15:25:56.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:56 compute-0 nova_compute[189485]: 2025-11-29 15:25:56.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:25:56 compute-0 nova_compute[189485]: 2025-11-29 15:25:56.525 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:25:56 compute-0 nova_compute[189485]: 2025-11-29 15:25:56.525 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:25:56 compute-0 nova_compute[189485]: 2025-11-29 15:25:56.525 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.051 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.053 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5700MB free_disk=72.43640518188477GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.054 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.054 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.147 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.147 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:25:57 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:57.175 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:25:57 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:57.177 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.178 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.206 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.208 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:25:57 compute-0 nova_compute[189485]: 2025-11-29 15:25:57.209 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
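The inventory dict logged just above is what feeds Placement's capacity check: a resource class can be consumed up to (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic with the logged figures (the helper function is illustrative, not nova code):

    # Effective Placement capacity: (total - reserved) * allocation_ratio.
    # Figures copied from the inventory dict in the log above; the helper
    # name is this example's own, not part of nova.
    def effective_capacity(total, reserved, allocation_ratio):
        return (total - reserved) * allocation_ratio

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, effective_capacity(inv['total'], inv['reserved'],
                                     inv['allocation_ratio']))
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB ~71.1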
Nov 29 15:25:58 compute-0 nova_compute[189485]: 2025-11-29 15:25:58.210 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:58 compute-0 nova_compute[189485]: 2025-11-29 15:25:58.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:58 compute-0 nova_compute[189485]: 2025-11-29 15:25:58.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:25:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:59.153 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:25:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:59.154 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:25:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:59.155 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
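The Acquiring/acquired/released triplets throughout this log are oslo.concurrency's standard lock tracing; the "inner" named in each message is the wrapper that lockutils generates around the locked callable. A minimal sketch of the two usual forms, assuming nothing beyond oslo.concurrency itself:

    from oslo_concurrency import lockutils

    # Decorator form, as ResourceTracker and ProcessMonitor use it: the
    # generated wrapper ("inner") emits the DEBUG acquire/release lines.
    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        pass  # body runs with the named semaphore held

    # Context-manager form for ad-hoc critical sections.
    with lockutils.lock('_check_child_processes'):
        pass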
Nov 29 15:25:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:25:59.178 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
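This closes the loop opened by the SbGlobalUpdateEvent match above: the metadata agent watches SB_Global for nb_cfg bumps, waits its 2 seconds, then acknowledges the new value by writing neutron:ovn-metadata-sb-cfg into its Chassis_Private row. A rough sketch of how such an event is declared with ovsdbapp (simplified from neutron's agent; details elided):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """Fires whenever the (single) SB_Global row is updated."""
        def __init__(self):
            # (events, table, conditions) -- the same fields shown in the
            # "Matched UPDATE" log line above.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # neutron delays here, then acks row.nb_cfg via a DbSetCommand
            # transaction like the one logged above.
            print('nb_cfg is now', row.nb_cfg)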
Nov 29 15:25:59 compute-0 nova_compute[189485]: 2025-11-29 15:25:59.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:25:59 compute-0 podman[203677]: time="2025-11-29T15:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:25:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:25:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4276 "" "Go-http-client/1.1"
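The two GETs above are the podman system service answering libpod REST calls over its unix socket (the Go-http-client here is the prometheus-podman-exporter). The same endpoint can be queried by hand; a sketch using the third-party requests-unixsocket package, which is this example's own assumption and not something the log shows:

    import requests_unixsocket  # third-party: pip install requests-unixsocket

    session = requests_unixsocket.Session()
    sock = 'http+unix://%2Frun%2Fpodman%2Fpodman.sock'
    # Same shape as the first logged request:
    r = session.get(sock + '/v4.9.3/libpod/containers/json',
                    params={'all': 'true', 'external': 'false'})
    print(r.status_code, len(r.json()), 'containers')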
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.047 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling cycle can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.048 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.048 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.050 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.051 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.052 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.053 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.054 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3fa840>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.057 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.058 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.059 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.060 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.061 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:26:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:26:01.062 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
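The burst above is one complete polling cycle: every pollster in the [pollsters] source is registered against a single-worker ThreadPoolExecutor, the local_instances discovery returns an empty list (no instances are running on this host yet), so every meter is skipped before each pollster is marked finished. The dispatch shape, reduced to a runnable illustration (not ceilometer's actual code):

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ['cpu', 'memory.usage', 'network.incoming.bytes']  # etc.

    def discover_local_instances():
        return []  # no instances on this host yet, as in the log

    def run_pollster(name):
        resources = discover_local_instances()
        if not resources:
            print(f'Skip pollster {name}, no resources found this cycle')
            return
        ...  # otherwise sample each discovered resource

    # One worker thread, as the log warns, so pollsters run sequentially.
    with ThreadPoolExecutor(max_workers=1) as pool:
        for name in pollsters:
            pool.submit(run_pollster, name)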
Nov 29 15:26:01 compute-0 openstack_network_exporter[205841]: ERROR   15:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:26:01 compute-0 openstack_network_exporter[205841]: ERROR   15:26:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:26:01 compute-0 openstack_network_exporter[205841]: ERROR   15:26:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:26:01 compute-0 openstack_network_exporter[205841]: ERROR   15:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:26:01 compute-0 openstack_network_exporter[205841]: ERROR   15:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
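These exporter errors are consistent with a compute node: ovn-northd (and its control socket) lives on the controllers, and without a userspace (netdev) datapath there are no PMD stats to report. A quick local check, assuming the stock OVN/OVS runtime directories:

    import glob
    # appctl locates its target through these control sockets; both globs
    # come back empty on a compute node, matching the errors above.
    print(glob.glob('/var/run/ovn/ovn-northd.*.ctl'))
    print(glob.glob('/var/run/openvswitch/ovsdb-server.*.ctl'))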
Nov 29 15:26:03 compute-0 podman[239559]: 2025-11-29 15:26:03.668125186 +0000 UTC m=+0.116129836 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.529 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.530 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.561 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.687 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.688 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.701 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.702 189489 INFO nova.compute.claims [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.824 189489 DEBUG nova.compute.provider_tree [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.840 189489 DEBUG nova.scheduler.client.report [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.863 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.864 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.919 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.920 189489 DEBUG nova.network.neutron [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.953 189489 INFO nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 15:26:08 compute-0 nova_compute[189485]: 2025-11-29 15:26:08.991 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 15:26:09 compute-0 nova_compute[189485]: 2025-11-29 15:26:09.114 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 15:26:09 compute-0 nova_compute[189485]: 2025-11-29 15:26:09.117 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 15:26:09 compute-0 nova_compute[189485]: 2025-11-29 15:26:09.118 189489 INFO nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Creating image(s)
Nov 29 15:26:09 compute-0 nova_compute[189485]: 2025-11-29 15:26:09.119 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:26:09 compute-0 nova_compute[189485]: 2025-11-29 15:26:09.120 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:26:09 compute-0 nova_compute[189485]: 2025-11-29 15:26:09.122 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:26:09 compute-0 nova_compute[189485]: 2025-11-29 15:26:09.123 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "a7996d50170914c9415f43103aca35ccc26834bd" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:26:09 compute-0 nova_compute[189485]: 2025-11-29 15:26:09.124 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
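Annotation: the Acquiring/acquired/released triplets above come from oslo.concurrency's lockutils decorator; the image-cache lock name is a hash keyed on the Glance image ID, so concurrent boots of the same image serialize on a single download. A minimal sketch of the pattern (lock_path and function body illustrative, not nova's exact internals):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('a7996d50170914c9415f43103aca35ccc26834bd',
                            external=True, lock_path='/var/lib/nova/tmp')
    def fetch_func_sync():
        # Only one fetch of this base image runs at a time on the host;
        # the "waited"/"held" durations in the log bracket this body.
        ...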
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.329 189489 WARNING oslo_policy.policy [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.330 189489 WARNING oslo_policy.policy [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.607 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.705 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.part --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
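Annotation: the prlimit wrapper above is oslo.concurrency's child-process guard. qemu-img parses untrusted image headers, so nova caps its address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) to keep a malformed or hostile image from wedging the host. Roughly how the call is issued (sketch):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.part',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30))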
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.708 189489 DEBUG nova.virt.images [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] a4b79580-904f-4527-8cf1-3888cf1ff785 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.711 189489 DEBUG nova.privsep.utils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.712 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.part /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.929 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.part /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.converted" returned: 0 in 0.217s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
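Annotation: the sequence above is the base-image materialisation path (fetch_to_raw): the download lands in *.part, qemu-img info identifies it as qcow2, and with force_raw_images enabled it is flattened to raw in *.converted before being renamed to the final cache name. A condensed sketch under those assumptions (paths taken from the log, control flow simplified):

    import json, os, subprocess

    base = '/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd'
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', base + '.part', '--force-share', '--output=json']))
    if info['format'] == 'qcow2':
        # Flatten to raw so the shared backing file has no nested format.
        subprocess.check_call(['qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                               '-f', 'qcow2', base + '.part', base + '.converted'])
        os.rename(base + '.converted', base)
    else:
        os.rename(base + '.part', base)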
Nov 29 15:26:10 compute-0 nova_compute[189485]: 2025-11-29 15:26:10.940 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.039 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd.converted --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.042 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.918s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.069 189489 INFO oslo.privsep.daemon [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpdefk0j_b/privsep.sock']
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.790 189489 INFO oslo.privsep.daemon [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Spawned new privsep daemon via rootwrap
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.649 239607 INFO oslo.privsep.daemon [-] privsep daemon starting
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.653 239607 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.655 239607 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.655 239607 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239607
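Annotation: nova-compute itself runs unprivileged; its first privileged operation spawns an oslo.privsep helper through sudo/nova-rootwrap (pid 239607 here), which keeps only the capabilities listed above rather than full root and serves the parent over the unix socket in /tmp. The context referenced in the helper command line ('nova.privsep.sys_admin_pctxt') is declared roughly as below; this is a sketch from memory, with the capability list matching the log and the decorated entrypoint purely illustrative:

    import os
    from oslo_privsep import capabilities as c
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'nova',
        cfg_section='nova_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[c.CAP_CHOWN, c.CAP_DAC_OVERRIDE, c.CAP_DAC_READ_SEARCH,
                      c.CAP_FOWNER, c.CAP_NET_ADMIN, c.CAP_SYS_ADMIN])

    @sys_admin_pctxt.entrypoint
    def chown(path, uid, gid):
        # Illustrative: the body executes inside the privsep daemon,
        # not in the unprivileged nova-compute process.
        os.chown(path, uid, gid)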
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.905 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.957 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.958 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "a7996d50170914c9415f43103aca35ccc26834bd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.959 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:26:11 compute-0 nova_compute[189485]: 2025-11-29 15:26:11.969 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.021 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.052s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.022 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd,backing_fmt=raw /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.063 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd,backing_fmt=raw /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.064 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
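Annotation: the qemu-img create above builds the instance root disk as a copy-on-write qcow2 overlay whose backing file is the shared raw base in _base/; writes land in the per-instance file while the base stays read-only and reusable across instances. The 1073741824-byte size comes straight from the flavor:

    root_gb = 1                     # m1.small flavor (root_gb=1, see the Flavor dump below)
    size_bytes = root_gb * 1024**3  # 1073741824, as passed to qemu-img create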
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.065 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.128 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.129 189489 DEBUG nova.virt.disk.api [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Checking if we can resize image /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.129 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.208 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.209 189489 DEBUG nova.virt.disk.api [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Cannot resize image /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
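Annotation: "Cannot resize image ... to a smaller size" is the grow-only guard in nova.virt.disk.api.can_resize_image, not an error: the overlay was just created at exactly the 1 GiB flavor size, so there is nothing to grow and the resize step is skipped. A condensed rendering of the check (sketch, parameter names mine):

    def can_resize_image(current_virtual_size: int, requested_size: int) -> bool:
        # A resize only ever grows the disk; equal or smaller requests are refused.
        return requested_size > current_virtual_size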
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.210 189489 DEBUG nova.objects.instance [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'migration_context' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.241 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.242 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.243 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.244 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.246 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.247 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.272 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.025s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.273 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.307 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.308 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.062s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
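Annotation: ephemeral disks go through the same cache as the root image: a raw file at the flavor's ephemeral size (ephemeral_gb=1) is created and pre-formatted VFAT with the label guests expect ('ephemeral0'), and per-instance qcow2 overlays are then layered on it just like the root disk (see the disk.eph0 create below). The 7-hex suffix in ephemeral_1_0706d66 is, as far as I can tell, a short hash key derived from the filesystem/guest type. The equivalent commands as a sketch:

    import subprocess

    base = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'
    subprocess.check_call(['qemu-img', 'create', '-f', 'raw', base, '1G'])
    subprocess.check_call(['mkfs', '-t', 'vfat', '-n', 'ephemeral0', base])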
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.337 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.434 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.435 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.436 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.451 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.488 189489 DEBUG nova.network.neutron [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Successfully created port: 71c1eec4-610d-4d07-b3d3-b94428ea07fc _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.548 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.549 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.595 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.597 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.598 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.657 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.658 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.659 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Ensure instance console log exists: /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.660 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.660 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:26:12 compute-0 nova_compute[189485]: 2025-11-29 15:26:12.661 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:26:12 compute-0 podman[239632]: 2025-11-29 15:26:12.664218413 +0000 UTC m=+0.118484438 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:26:13 compute-0 nova_compute[189485]: 2025-11-29 15:26:13.706 189489 DEBUG nova.network.neutron [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Successfully updated port: 71c1eec4-610d-4d07-b3d3-b94428ea07fc _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 15:26:13 compute-0 nova_compute[189485]: 2025-11-29 15:26:13.729 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:26:13 compute-0 nova_compute[189485]: 2025-11-29 15:26:13.730 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:26:13 compute-0 nova_compute[189485]: 2025-11-29 15:26:13.730 189489 DEBUG nova.network.neutron [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 15:26:14 compute-0 nova_compute[189485]: 2025-11-29 15:26:14.268 189489 DEBUG nova.compute.manager [req-3377f87a-bc98-40a6-96dd-35013da6921a req-dbcad32c-11a8-44cc-bab4-e13d75ef020a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received event network-changed-71c1eec4-610d-4d07-b3d3-b94428ea07fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:26:14 compute-0 nova_compute[189485]: 2025-11-29 15:26:14.269 189489 DEBUG nova.compute.manager [req-3377f87a-bc98-40a6-96dd-35013da6921a req-dbcad32c-11a8-44cc-bab4-e13d75ef020a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Refreshing instance network info cache due to event network-changed-71c1eec4-610d-4d07-b3d3-b94428ea07fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 15:26:14 compute-0 nova_compute[189485]: 2025-11-29 15:26:14.269 189489 DEBUG oslo_concurrency.lockutils [req-3377f87a-bc98-40a6-96dd-35013da6921a req-dbcad32c-11a8-44cc-bab4-e13d75ef020a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:26:14 compute-0 nova_compute[189485]: 2025-11-29 15:26:14.305 189489 DEBUG nova.network.neutron [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 15:26:14 compute-0 podman[239661]: 2025-11-29 15:26:14.698508631 +0000 UTC m=+0.115956710 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Nov 29 15:26:14 compute-0 podman[239662]: 2025-11-29 15:26:14.706246059 +0000 UTC m=+0.108622794 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:26:14 compute-0 podman[239660]: 2025-11-29 15:26:14.729739729 +0000 UTC m=+0.143238722 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, name=ubi9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:26:14 compute-0 podman[239663]: 2025-11-29 15:26:14.775463015 +0000 UTC m=+0.170186795 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.463 189489 DEBUG nova.network.neutron [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.511 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.512 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Instance network_info: |[{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
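Annotation: the network_info payload above is plain JSON and is often easier to read programmatically when debugging. Note "mtu": 1442 together with "tunneled": true, which is consistent with a Geneve overlay network on a 1500-byte physical MTU (58 bytes of encapsulation overhead). A trimmed parsing sketch:

    import json

    # Trimmed copy of the payload logged above; only the fields used below.
    cached = '''[{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc",
      "address": "fa:16:3e:da:91:00",
      "network": {"bridge": "br-int",
        "subnets": [{"ips": [{"address": "192.168.0.142"}]}],
        "meta": {"mtu": 1442, "tunneled": true}}}]'''

    vif = json.loads(cached)[0]
    print(vif['address'])                                     # fa:16:3e:da:91:00
    print(vif['network']['subnets'][0]['ips'][0]['address'])  # 192.168.0.142
    print(vif['network']['meta']['mtu'])                      # 1442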
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.512 189489 DEBUG oslo_concurrency.lockutils [req-3377f87a-bc98-40a6-96dd-35013da6921a req-dbcad32c-11a8-44cc-bab4-e13d75ef020a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.512 189489 DEBUG nova.network.neutron [req-3377f87a-bc98-40a6-96dd-35013da6921a req-dbcad32c-11a8-44cc-bab4-e13d75ef020a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Refreshing network info cache for port 71c1eec4-610d-4d07-b3d3-b94428ea07fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.516 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Start _get_guest_xml network_info=[{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:24:51Z,direct_url=<?>,disk_format='qcow2',id=a4b79580-904f-4527-8cf1-3888cf1ff785,min_disk=0,min_ram=0,name='cirros',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:24:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.529 189489 WARNING nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.542 189489 DEBUG nova.virt.libvirt.host [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.544 189489 DEBUG nova.virt.libvirt.host [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.552 189489 DEBUG nova.virt.libvirt.host [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.553 189489 DEBUG nova.virt.libvirt.host [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.553 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.554 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:24:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='34af94d1-a6e1-4bf0-8957-036dc948fe9d',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:24:51Z,direct_url=<?>,disk_format='qcow2',id=a4b79580-904f-4527-8cf1-3888cf1ff785,min_disk=0,min_ram=0,name='cirros',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:24:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.554 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.554 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.555 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.555 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.555 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.555 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.555 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.556 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.556 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.556 189489 DEBUG nova.virt.hardware [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
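The topology walk above keeps every sockets/cores/threads combination whose product equals the vCPU count and that stays within the logged 65536 per-axis limits; with 1 vCPU only 1:1:1 qualifies, which is why exactly one topology is found. An illustrative restatement of that filter (not nova's exact code):

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1, 65536, 65536, 65536)))  # [(1, 1, 1)]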
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.560 189489 DEBUG nova.privsep.utils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
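The direct-I/O probe decides whether instance disks can be opened with cache="none", as seen in the generated domain XML below. A hedged sketch of such a probe: O_DIRECT demands block-aligned buffers, hence the page-aligned mmap.

    import mmap
    import os

    def supports_direct_io(path):
        testfile = os.path.join(path, '.directio.test')
        buf = mmap.mmap(-1, 4096)  # page-aligned 4 KiB buffer
        try:
            fd = os.open(testfile, os.O_CREAT | os.O_WRONLY | os.O_DIRECT)
            try:
                os.write(fd, buf)
                return True
            finally:
                os.close(fd)
        except OSError:
            return False
        finally:
            buf.close()
            try:
                os.unlink(testfile)
            except FileNotFoundError:
                pass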
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.561 189489 DEBUG nova.virt.libvirt.vif [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:26:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-ym8olkg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:26:09Z,user_data=None,user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=b5d60fb8-b63e-4b0a-b908-00453be8ce37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.562 189489 DEBUG nova.network.os_vif_util [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.563 189489 DEBUG nova.network.os_vif_util [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:91:00,bridge_name='br-int',has_traffic_filtering=True,id=71c1eec4-610d-4d07-b3d3-b94428ea07fc,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71c1eec4-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.564 189489 DEBUG nova.objects.instance [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.585 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <uuid>b5d60fb8-b63e-4b0a-b908-00453be8ce37</uuid>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <name>instance-00000001</name>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <memory>524288</memory>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <nova:name>test_0</nova:name>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:26:15</nova:creationTime>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <nova:flavor name="m1.small">
Nov 29 15:26:15 compute-0 nova_compute[189485]:        <nova:memory>512</nova:memory>
Nov 29 15:26:15 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:26:15 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:26:15 compute-0 nova_compute[189485]:        <nova:ephemeral>1</nova:ephemeral>
Nov 29 15:26:15 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:26:15 compute-0 nova_compute[189485]:        <nova:user uuid="5cbf094e2197487fbe16a0fe6e3076ba">admin</nova:user>
Nov 29 15:26:15 compute-0 nova_compute[189485]:        <nova:project uuid="04d676205d9142d19f3d4ce7389f72a2">admin</nova:project>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="a4b79580-904f-4527-8cf1-3888cf1ff785"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:26:15 compute-0 nova_compute[189485]:        <nova:port uuid="71c1eec4-610d-4d07-b3d3-b94428ea07fc">
Nov 29 15:26:15 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="192.168.0.142" ipVersion="4"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <system>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <entry name="serial">b5d60fb8-b63e-4b0a-b908-00453be8ce37</entry>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <entry name="uuid">b5d60fb8-b63e-4b0a-b908-00453be8ce37</entry>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </system>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <os>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  </os>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <features>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  </features>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <target dev="vdb" bus="virtio"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.config"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:da:91:00"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <target dev="tap71c1eec4-61"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/console.log" append="off"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <video>
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </video>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:26:15 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:26:15 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:26:15 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:26:15 compute-0 nova_compute[189485]: </domain>
Nov 29 15:26:15 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
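The domain XML above is one multi-line journal record, so every continuation line repeats the syslog prefix. After stripping the prefixes into a plain file, the dump can be sanity-checked with the standard library; for example, listing the three disks and their buses:

    import xml.etree.ElementTree as ET

    dom = ET.parse('instance-00000001.xml').getroot()
    print(dom.get('type'), dom.findtext('name'), dom.findtext('memory'))
    for disk in dom.findall('./devices/disk'):
        src = disk.find('source')
        tgt = disk.find('target')
        print(disk.get('device'), tgt.get('dev'), tgt.get('bus'),
              None if src is None else src.get('file'))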
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.585 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Preparing to wait for external event network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.585 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.586 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.586 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
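The three lock messages above register a waiter for network-vif-plugged before the port is actually plugged, so the Neutron callback that arrives later cannot race past the waiter. A simplified threading analogue of that prepare/pop pattern (nova's real implementation is eventlet-based):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            with self._lock:  # same role as the "-events" lock above
                return self._events.setdefault((uuid, name), threading.Event())

        def pop(self, uuid, name):
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev is not None:
                ev.set()  # wakes the spawning thread

    # spawn side: waiter = events.prepare(uuid, 'network-vif-plugged-<port>')
    # ...plug the VIF, define and launch the domain... then waiter.wait(300)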
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.586 189489 DEBUG nova.virt.libvirt.vif [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:26:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-ym8olkg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:26:09Z,user_data=None,user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=b5d60fb8-b63e-4b0a-b908-00453be8ce37,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.587 189489 DEBUG nova.network.os_vif_util [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.587 189489 DEBUG nova.network.os_vif_util [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:da:91:00,bridge_name='br-int',has_traffic_filtering=True,id=71c1eec4-610d-4d07-b3d3-b94428ea07fc,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71c1eec4-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.588 189489 DEBUG os_vif [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:91:00,bridge_name='br-int',has_traffic_filtering=True,id=71c1eec4-610d-4d07-b3d3-b94428ea07fc,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71c1eec4-61') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.631 189489 DEBUG ovsdbapp.backend.ovs_idl [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.631 189489 DEBUG ovsdbapp.backend.ovs_idl [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.631 189489 DEBUG ovsdbapp.backend.ovs_idl [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.632 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.632 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.633 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.633 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.635 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.638 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.648 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.648 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.648 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:26:15 compute-0 nova_compute[189485]: 2025-11-29 15:26:15.649 189489 INFO oslo.privsep.daemon [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpb208gybr/privsep.sock']#033[00m
Nov 29 15:26:15 compute-0 podman[239739]: 2025-11-29 15:26:15.672996162 +0000 UTC m=+0.120514403 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.465 189489 INFO oslo.privsep.daemon [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.301 239764 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.309 239764 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.313 239764 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.313 239764 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239764#033[00m
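The helper spawned above runs a root privsep daemon whose capability set (CAP_DAC_OVERRIDE|CAP_NET_ADMIN) matches what the vif_plug_ovs context requests. A hedged sketch of how such a context is declared with oslo.privsep; the entrypoint name here is illustrative:

    from oslo_privsep import capabilities, priv_context

    vif_plug = priv_context.PrivContext(
        'vif_plug_ovs',
        cfg_section='vif_plug_ovs_privileged',
        pypath=__name__ + '.vif_plug',
        capabilities=[capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_NET_ADMIN],
    )

    @vif_plug.entrypoint
    def set_interface_up(ifname):
        # Body executes inside the root daemon (pid 239764 above),
        # not in the unprivileged nova-compute process.
        ...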
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.814 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.815 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap71c1eec4-61, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.816 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap71c1eec4-61, col_values=(('external_ids', {'iface-id': '71c1eec4-610d-4d07-b3d3-b94428ea07fc', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:da:91:00', 'vm-uuid': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.821 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:26:16 compute-0 NetworkManager[56360]: <info>  [1764429976.8217] manager: (tap71c1eec4-61): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.836 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.837 189489 INFO os_vif [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:da:91:00,bridge_name='br-int',has_traffic_filtering=True,id=71c1eec4-610d-4d07-b3d3-b94428ea07fc,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71c1eec4-61')#033[00m
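The plug boils down to the commands logged in the two transactions above: ensure br-int exists, add the tap port, and tag the Interface row with the Neutron port id so ovn-controller can claim it. The same three commands can be issued directly with ovsdbapp; a sketch, assuming the local switch listens on tcp:127.0.0.1:6640 as in this log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap71c1eec4-61', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap71c1eec4-61',
            ('external_ids', {
                'iface-id': '71c1eec4-610d-4d07-b3d3-b94428ea07fc',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:da:91:00',
                'vm-uuid': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37',
            })))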
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.928 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.930 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.931 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.932 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No VIF found with MAC fa:16:3e:da:91:00, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:26:16 compute-0 nova_compute[189485]: 2025-11-29 15:26:16.934 189489 INFO nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Using config drive#033[00m
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.260 189489 DEBUG nova.network.neutron [req-3377f87a-bc98-40a6-96dd-35013da6921a req-dbcad32c-11a8-44cc-bab4-e13d75ef020a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated VIF entry in instance network info cache for port 71c1eec4-610d-4d07-b3d3-b94428ea07fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.261 189489 DEBUG nova.network.neutron [req-3377f87a-bc98-40a6-96dd-35013da6921a req-dbcad32c-11a8-44cc-bab4-e13d75ef020a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.285 189489 DEBUG oslo_concurrency.lockutils [req-3377f87a-bc98-40a6-96dd-35013da6921a req-dbcad32c-11a8-44cc-bab4-e13d75ef020a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.585 189489 INFO nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Creating config drive at /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.config#033[00m
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.595 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplm6j9ydu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:26:17 compute-0 podman[239770]: 2025-11-29 15:26:17.659728185 +0000 UTC m=+0.106320892 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.742 189489 DEBUG oslo_concurrency.processutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplm6j9ydu" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
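The config drive is a plain ISO 9660 image built from a staging directory of metadata files. The publisher string in the logged command only looks unquoted because oslo logs argv joined by spaces; it is passed as a single argument. The call can be reproduced with oslo.concurrency (a sketch; the output path and staging directory below are illustrative):

    from oslo_concurrency import processutils

    processutils.execute(
        '/usr/bin/mkisofs', '-o', 'disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher',
        'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/config-drive-staging')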
Nov 29 15:26:17 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Nov 29 15:26:17 compute-0 kernel: tap71c1eec4-61: entered promiscuous mode
Nov 29 15:26:17 compute-0 NetworkManager[56360]: <info>  [1764429977.8800] manager: (tap71c1eec4-61): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Nov 29 15:26:17 compute-0 ovn_controller[97827]: 2025-11-29T15:26:17Z|00027|binding|INFO|Claiming lport 71c1eec4-610d-4d07-b3d3-b94428ea07fc for this chassis.
Nov 29 15:26:17 compute-0 ovn_controller[97827]: 2025-11-29T15:26:17Z|00028|binding|INFO|71c1eec4-610d-4d07-b3d3-b94428ea07fc: Claiming fa:16:3e:da:91:00 192.168.0.142
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.882 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:17 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:17.898 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:91:00 192.168.0.142'], port_security=['fa:16:3e:da:91:00 192.168.0.142'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.142/24', 'neutron:device_id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa63adc8-00c5-408f-a9a0-653db4d11058', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04d676205d9142d19f3d4ce7389f72a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab1ce576-0f3a-4a3e-abf1-69502fd41864', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566ecd39-faeb-413e-8894-df94f2ba695a, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=71c1eec4-610d-4d07-b3d3-b94428ea07fc) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:26:17 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:17.900 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 71c1eec4-610d-4d07-b3d3-b94428ea07fc in datapath fa63adc8-00c5-408f-a9a0-653db4d11058 bound to our chassis#033[00m
Nov 29 15:26:17 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:17.902 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa63adc8-00c5-408f-a9a0-653db4d11058#033[00m
Nov 29 15:26:17 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:17.903 106713 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp01tksn0r/privsep.sock']#033[00m
Nov 29 15:26:17 compute-0 systemd-udevd[239812]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:26:17 compute-0 NetworkManager[56360]: <info>  [1764429977.9358] device (tap71c1eec4-61): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:26:17 compute-0 NetworkManager[56360]: <info>  [1764429977.9384] device (tap71c1eec4-61): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:26:17 compute-0 systemd-machined[155802]: New machine qemu-1-instance-00000001.
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.977 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:17 compute-0 nova_compute[189485]: 2025-11-29 15:26:17.984 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:17 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Nov 29 15:26:17 compute-0 ovn_controller[97827]: 2025-11-29T15:26:17Z|00029|binding|INFO|Setting lport 71c1eec4-610d-4d07-b3d3-b94428ea07fc ovn-installed in OVS
Nov 29 15:26:17 compute-0 ovn_controller[97827]: 2025-11-29T15:26:17Z|00030|binding|INFO|Setting lport 71c1eec4-610d-4d07-b3d3-b94428ea07fc up in Southbound
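Once ovn-controller claims the lport, the Southbound Port_Binding row gains a chassis and flips up to true, which is what the two messages above report. That can be confirmed from the same node (a sketch; assumes ovn-sbctl can reach the local SB database):

    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--columns=chassis,up', 'find', 'Port_Binding',
         'logical_port=71c1eec4-610d-4d07-b3d3-b94428ea07fc'],
        capture_output=True, text=True, check=True)
    print(out.stdout)  # chassis populated and up : [true] after the claim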
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.000 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:18.603 106713 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 15:26:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:18.603 106713 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp01tksn0r/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 15:26:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:18.482 239830 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 15:26:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:18.486 239830 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 15:26:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:18.489 239830 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Nov 29 15:26:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:18.489 239830 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239830#033[00m
Nov 29 15:26:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:18.606 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[07371b57-c7a9-4f57-b882-9406fa2a2c3a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.621 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764429978.6205723, b5d60fb8-b63e-4b0a-b908-00453be8ce37 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.622 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] VM Started (Lifecycle Event)#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.672 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.684 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764429978.6209574, b5d60fb8-b63e-4b0a-b908-00453be8ce37 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.684 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.703 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.712 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.734 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.833 189489 DEBUG nova.compute.manager [req-dd23ac29-f441-4947-8470-1a97a3829b5d req-acec80d4-0604-4bc7-87af-420762bff069 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received event network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.834 189489 DEBUG oslo_concurrency.lockutils [req-dd23ac29-f441-4947-8470-1a97a3829b5d req-acec80d4-0604-4bc7-87af-420762bff069 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.834 189489 DEBUG oslo_concurrency.lockutils [req-dd23ac29-f441-4947-8470-1a97a3829b5d req-acec80d4-0604-4bc7-87af-420762bff069 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.835 189489 DEBUG oslo_concurrency.lockutils [req-dd23ac29-f441-4947-8470-1a97a3829b5d req-acec80d4-0604-4bc7-87af-420762bff069 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.835 189489 DEBUG nova.compute.manager [req-dd23ac29-f441-4947-8470-1a97a3829b5d req-acec80d4-0604-4bc7-87af-420762bff069 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Processing event network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.836 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
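[annotation] The interleaving above is nova's prepare-and-wait external-event handshake: the spawning thread registers a waiter for network-vif-plugged-<port>, plugs the VIF, and blocks until neutron reports the plug back through nova (the req-dd23ac29... request), which pops the waiter under the per-instance "-events" lock. A minimal sketch of the idea using plain threading primitives (illustrative, not nova's actual classes):

    import threading

    class InstanceEvents:
        """Schematic of nova's prepare/pop event handshake."""
        def __init__(self):
            self._waiters = {}          # (name, tag) -> threading.Event

        def prepare(self, name, tag):
            ev = threading.Event()
            self._waiters[(name, tag)] = ev
            return ev

        def pop(self, name, tag):
            # returns None when nothing is waiting -> "unexpected event"
            ev = self._waiters.pop((name, tag), None)
            if ev:
                ev.set()
            return ev

    # spawn side: ev = events.prepare('network-vif-plugged', port_id),
    # then plug the VIF and ev.wait(timeout=...); the external-event
    # handler calls events.pop(...) when neutron reports the plug.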
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.843 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764429978.8420691, b5d60fb8-b63e-4b0a-b908-00453be8ce37 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.843 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.845 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.859 189489 INFO nova.virt.libvirt.driver [-] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Instance spawned successfully.#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.861 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.864 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.871 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.892 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
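[annotation] The two Synchronizing lines record the guest going Paused (VM power_state 3) and then Resumed (power_state 1) while the DB still says 0: nova starts the libvirt domain paused, finishes wiring it up, then resumes it. The numeric codes are nova.compute.power_state constants, and event-driven sync is skipped whenever a task is still in flight, as restated here:

    # nova.compute.power_state constants (values as seen in the log)
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0x00, 0x01, 0x03, 0x04

    def handle_lifecycle_event(instance, vm_power_state):
        # schematic of the skip logged above: a pending task ('spawning')
        # means sync must not fight the ongoing operation
        if instance.task_state is not None:
            print('During sync_power_state the instance has a pending '
                  f'task ({instance.task_state}). Skip.')
            return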
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.899 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.900 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.901 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.902 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.903 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.904 189489 DEBUG nova.virt.libvirt.driver [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
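[annotation] After a successful spawn the libvirt driver records the defaults it chose for unset image properties on the instance, so later operations (resize, migration, config changes) keep the same virtual hardware. Collected from the lines above, the registered set is effectively:

    # defaults registered for instance b5d60fb8-... as logged above
    IMAGE_PROPERTY_DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }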
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.958 189489 INFO nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Took 9.84 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:26:18 compute-0 nova_compute[189485]: 2025-11-29 15:26:18.959 189489 DEBUG nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:26:19 compute-0 nova_compute[189485]: 2025-11-29 15:26:19.026 189489 INFO nova.compute.manager [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Took 10.38 seconds to build instance.#033[00m
Nov 29 15:26:19 compute-0 nova_compute[189485]: 2025-11-29 15:26:19.042 189489 DEBUG oslo_concurrency.lockutils [None req-e140f8e5-73c8-47c5-ab39-4e3fdfdfc338 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.087 239830 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.087 239830 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.087 239830 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.613 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[5bbb8397-fff0-498c-af19-f716260a18ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.615 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapfa63adc8-01 in ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.617 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapfa63adc8-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.617 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6f18a3bb-77cc-4e71-b4b0-02ec6cbfb9f3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.620 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[3a57d61a-2fa0-47a2-80fb-2d1ae702dcfe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.649 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[0fdff11e-719e-4f95-bbcf-252dde277373]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.678 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a035df90-7324-4c73-b3de-b7f3a2d00cad]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
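[annotation] provision_datapath is building the ovnmeta-<network> namespace: probe for a stale tap device, set promote_secondaries (the sysctl reply tuple is (stdout, stderr, exit code)), then create a veth pair with one end (tapfa63adc8-01 here) moved inside the namespace. A rough equivalent driven through the ip(8) CLI (schematic only; the agent actually goes through its privsep'd ip_lib, not subprocess):

    import subprocess

    def provision_datapath(ns, dev):
        run = lambda *a: subprocess.run(a, check=True,
                                        capture_output=True, text=True)
        run('ip', 'netns', 'add', ns)
        out = run('ip', 'netns', 'exec', ns, 'sysctl', '-w',
                  'net.ipv4.conf.all.promote_secondaries=1')
        # out.stdout / out.stderr / out.returncode correspond to the
        # (stdout, stderr, rc) triple in the logged privsep reply
        run('ip', 'link', 'add', dev + '-00', 'type', 'veth',
            'peer', 'name', dev + '-01')
        run('ip', 'link', 'set', dev + '-01', 'netns', ns)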
Nov 29 15:26:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:19.682 106713 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp51qz1nxh/privsep.sock']#033[00m
Nov 29 15:26:19 compute-0 nova_compute[189485]: 2025-11-29 15:26:19.736 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:19 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 15:26:19 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.405 106713 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.406 106713 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp51qz1nxh/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.277 239871 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.280 239871 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.282 239871 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.283 239871 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239871#033[00m
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.410 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[1fef887b-4b58-4df1-a976-53ccf44e1c58]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
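[annotation] A second privsep daemon (pid 239871) is spawned lazily here for the neutron.privileged.link_cmd context; note its narrower capability set (CAP_NET_ADMIN|CAP_SYS_ADMIN) compared to the first daemon (239830). Continuing the earlier sketch, the only difference is the declared capabilities (the cfg_section name below is illustrative):

    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    # narrower context for pure link operations, matching the eff/prm
    # set logged by pid 239871 (assumption: neutron's link_cmd context)
    link_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_link',
        pypath=__name__ + '.link_cmd',
        capabilities=[caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN],
    )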
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.882 239871 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.882 239871 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:20.882 239871 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:26:20 compute-0 nova_compute[189485]: 2025-11-29 15:26:20.968 189489 DEBUG nova.compute.manager [req-55461530-97d3-40c9-b11b-de7029c56f4b req-a32cc768-775a-461c-8ead-2032ca373583 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received event network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:26:20 compute-0 nova_compute[189485]: 2025-11-29 15:26:20.969 189489 DEBUG oslo_concurrency.lockutils [req-55461530-97d3-40c9-b11b-de7029c56f4b req-a32cc768-775a-461c-8ead-2032ca373583 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:20 compute-0 nova_compute[189485]: 2025-11-29 15:26:20.970 189489 DEBUG oslo_concurrency.lockutils [req-55461530-97d3-40c9-b11b-de7029c56f4b req-a32cc768-775a-461c-8ead-2032ca373583 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:20 compute-0 nova_compute[189485]: 2025-11-29 15:26:20.971 189489 DEBUG oslo_concurrency.lockutils [req-55461530-97d3-40c9-b11b-de7029c56f4b req-a32cc768-775a-461c-8ead-2032ca373583 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:26:20 compute-0 nova_compute[189485]: 2025-11-29 15:26:20.971 189489 DEBUG nova.compute.manager [req-55461530-97d3-40c9-b11b-de7029c56f4b req-a32cc768-775a-461c-8ead-2032ca373583 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] No waiting events found dispatching network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:26:20 compute-0 nova_compute[189485]: 2025-11-29 15:26:20.972 189489 WARNING nova.compute.manager [req-55461530-97d3-40c9-b11b-de7029c56f4b req-a32cc768-775a-461c-8ead-2032ca373583 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received unexpected event network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc for instance with vm_state active and task_state None.#033[00m
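[annotation] This WARNING is the benign failure mode of the handshake sketched earlier: a second network-vif-plugged arrives after the instance is already active, no waiter is registered, so the pop finds nothing and nova can only log it. Reusing the InstanceEvents sketch from above:

    # nothing was prepared for this (name, tag), so pop() returns None
    if events.pop('network-vif-plugged',
                  '71c1eec4-610d-4d07-b3d3-b94428ea07fc') is None:
        print('Received unexpected event for instance with vm_state '
              'active and task_state None.')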
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.453 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[54e318b7-e77c-429a-b650-d62d03511197]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 NetworkManager[56360]: <info>  [1764429981.4971] manager: (tapfa63adc8-00): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.492 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[072cff7d-1e62-43e8-9dfb-27b37f79525d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.546 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[7b0b48c9-1c0c-4c4d-acc2-316b44f3949e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.552 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[be0a5f5c-7595-411b-b2f1-21bd9e7cfeaf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 systemd-udevd[239895]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:26:21 compute-0 NetworkManager[56360]: <info>  [1764429981.5906] device (tapfa63adc8-00): carrier: link connected
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.597 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[18d0ce60-cf2c-4e87-a262-af778261d4c6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.633 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[b9788c9f-9e7e-43ee-8bda-68cc3119605e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa63adc8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:9e:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373724, 'reachable_time': 37305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 239905, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.655 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1c1240c6-12d6-4bc9-bbaf-7b6f77116552]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe5d:9e29'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373724, 'tstamp': 373724}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 239916, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.680 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[3312a3fa-268c-4e3b-bcca-86737d47e65f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa63adc8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:9e:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373724, 'reachable_time': 37305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 239924, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
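[annotation] The two huge DEBUG replies above are pyroute2-style RTM_NEWLINK/RTM_NEWADDR messages returned through privsep when the agent inspects the new veth inside the namespace; the useful fields (name, MAC, state) hide in the 'attrs' list of [key, value] pairs. A small extraction helper (illustrative):

    def ifla(msg, key):
        # e.g. ifla(link_msg, 'IFLA_IFNAME')  -> 'tapfa63adc8-01'
        #      ifla(link_msg, 'IFLA_ADDRESS') -> 'fa:16:3e:5d:9e:29'
        return next((v for k, v in msg['attrs'] if k == key), None)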
Nov 29 15:26:21 compute-0 podman[239880]: 2025-11-29 15:26:21.692725946 +0000 UTC m=+0.163930076 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.727 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1c3c6b58-baa0-460e-843b-dbfd89457cf5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 nova_compute[189485]: 2025-11-29 15:26:21.819 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.823 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1719ca90-d4ce-4ea8-8d3b-4f6bfdb4e8ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.825 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa63adc8-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.826 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.827 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa63adc8-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:26:21 compute-0 kernel: tapfa63adc8-00: entered promiscuous mode
Nov 29 15:26:21 compute-0 NetworkManager[56360]: <info>  [1764429981.8327] manager: (tapfa63adc8-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Nov 29 15:26:21 compute-0 nova_compute[189485]: 2025-11-29 15:26:21.829 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.847 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa63adc8-00, col_values=(('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
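[annotation] The three ovsdbapp transactions re-home the host side of the veth: remove tapfa63adc8-00 from br-ex if it lingers there, add it to br-int, and set external_ids:iface-id on its Interface row so ovn-controller can bind the logical port. The ovs-vsctl equivalents, wrapped for illustration (same effect, different path than the OVSDB IDL transactions the agent really uses):

    import subprocess

    PORT = 'tapfa63adc8-00'
    IFACE_ID = 'e36df9a9-fba2-436d-a18e-320b39f26f3c'
    for cmd in (
        ['ovs-vsctl', '--if-exists', 'del-port', 'br-ex', PORT],
        ['ovs-vsctl', '--may-exist', 'add-port', 'br-int', PORT],
        ['ovs-vsctl', 'set', 'Interface', PORT,
         f'external_ids:iface-id={IFACE_ID}'],
    ):
        subprocess.run(cmd, check=True)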
Nov 29 15:26:21 compute-0 ovn_controller[97827]: 2025-11-29T15:26:21Z|00031|binding|INFO|Releasing lport e36df9a9-fba2-436d-a18e-320b39f26f3c from this chassis (sb_readonly=0)
Nov 29 15:26:21 compute-0 nova_compute[189485]: 2025-11-29 15:26:21.849 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.859 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/fa63adc8-00c5-408f-a9a0-653db4d11058.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/fa63adc8-00c5-408f-a9a0-653db4d11058.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.861 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[caff68c3-1621-4ec3-b1a9-d620575296e7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.863 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-fa63adc8-00c5-408f-a9a0-653db4d11058
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/fa63adc8-00c5-408f-a9a0-653db4d11058.pid.haproxy
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID fa63adc8-00c5-408f-a9a0-653db4d11058
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 15:26:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:21.865 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'env', 'PROCESS_TAG=haproxy-fa63adc8-00c5-408f-a9a0-653db4d11058', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/fa63adc8-00c5-408f-a9a0-653db4d11058.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
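[annotation] The generated config runs haproxy inside the ovnmeta namespace, bound to the well-known metadata address 169.254.169.254:80; the server line points at the agent's UNIX socket (/var/lib/neutron/metadata_proxy), and the injected X-OVN-Network-ID header tells the metadata agent which network the request came from. A minimal sketch of how create_config_file fills such a template (the real, fuller template lives in neutron.agent.ovn.metadata.driver):

    import string

    HAPROXY_TEMPLATE = string.Template('''\
    listen listener
        bind 169.254.169.254:80
        server metadata $socket
        http-request add-header X-OVN-Network-ID $network_id
    ''')

    cfg = HAPROXY_TEMPLATE.substitute(
        socket='/var/lib/neutron/metadata_proxy',
        network_id='fa63adc8-00c5-408f-a9a0-653db4d11058')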
Nov 29 15:26:21 compute-0 nova_compute[189485]: 2025-11-29 15:26:21.879 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:22 compute-0 podman[239955]: 2025-11-29 15:26:22.532177756 +0000 UTC m=+0.133402048 container create fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 15:26:22 compute-0 podman[239955]: 2025-11-29 15:26:22.478819266 +0000 UTC m=+0.080043578 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:26:22 compute-0 systemd[1]: Started libpod-conmon-fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37.scope.
Nov 29 15:26:22 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:26:22 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/49f375938944383c0096ed8219c0486165ec32a17e7708e1f5528067da92808b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:26:22 compute-0 podman[239955]: 2025-11-29 15:26:22.640728046 +0000 UTC m=+0.241952368 container init fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:26:22 compute-0 podman[239955]: 2025-11-29 15:26:22.647462257 +0000 UTC m=+0.248686539 container start fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 15:26:22 compute-0 neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058[239969]: [NOTICE]   (239973) : New worker (239975) forked
Nov 29 15:26:22 compute-0 neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058[239969]: [NOTICE]   (239973) : Loading success.
Nov 29 15:26:24 compute-0 nova_compute[189485]: 2025-11-29 15:26:24.741 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:26 compute-0 nova_compute[189485]: 2025-11-29 15:26:26.823 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:29 compute-0 nova_compute[189485]: 2025-11-29 15:26:29.746 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:29 compute-0 podman[203677]: time="2025-11-29T15:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:26:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:26:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4757 "" "Go-http-client/1.1"
Nov 29 15:26:31 compute-0 openstack_network_exporter[205841]: ERROR   15:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:26:31 compute-0 openstack_network_exporter[205841]: ERROR   15:26:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:26:31 compute-0 openstack_network_exporter[205841]: ERROR   15:26:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:26:31 compute-0 openstack_network_exporter[205841]: ERROR   15:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:26:31 compute-0 openstack_network_exporter[205841]: ERROR   15:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:26:31 compute-0 nova_compute[189485]: 2025-11-29 15:26:31.825 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:34 compute-0 podman[239985]: 2025-11-29 15:26:34.63589486 +0000 UTC m=+0.080527031 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:26:34 compute-0 nova_compute[189485]: 2025-11-29 15:26:34.748 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:36 compute-0 ovn_controller[97827]: 2025-11-29T15:26:36Z|00032|binding|INFO|Releasing lport e36df9a9-fba2-436d-a18e-320b39f26f3c from this chassis (sb_readonly=0)
Nov 29 15:26:36 compute-0 NetworkManager[56360]: <info>  [1764429996.3167] manager: (patch-provnet-902f0f77-8c45-4eff-be74-67c45c992175-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.316 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:36 compute-0 NetworkManager[56360]: <info>  [1764429996.3230] device (patch-provnet-902f0f77-8c45-4eff-be74-67c45c992175-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 15:26:36 compute-0 NetworkManager[56360]: <info>  [1764429996.3335] manager: (patch-br-int-to-provnet-902f0f77-8c45-4eff-be74-67c45c992175): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Nov 29 15:26:36 compute-0 NetworkManager[56360]: <info>  [1764429996.3385] device (patch-br-int-to-provnet-902f0f77-8c45-4eff-be74-67c45c992175)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Nov 29 15:26:36 compute-0 ovn_controller[97827]: 2025-11-29T15:26:36Z|00033|binding|INFO|Releasing lport e36df9a9-fba2-436d-a18e-320b39f26f3c from this chassis (sb_readonly=0)
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.344 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:36 compute-0 NetworkManager[56360]: <info>  [1764429996.3505] manager: (patch-br-int-to-provnet-902f0f77-8c45-4eff-be74-67c45c992175): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Nov 29 15:26:36 compute-0 NetworkManager[56360]: <info>  [1764429996.3586] manager: (patch-provnet-902f0f77-8c45-4eff-be74-67c45c992175-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.359 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:36 compute-0 NetworkManager[56360]: <info>  [1764429996.3650] device (patch-provnet-902f0f77-8c45-4eff-be74-67c45c992175-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 15:26:36 compute-0 NetworkManager[56360]: <info>  [1764429996.3698] device (patch-br-int-to-provnet-902f0f77-8c45-4eff-be74-67c45c992175)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.633 189489 DEBUG nova.compute.manager [req-8dae5116-1f87-43d4-9908-e7a9726f3d94 req-351973b2-000b-4709-88de-08546529947d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received event network-changed-71c1eec4-610d-4d07-b3d3-b94428ea07fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.635 189489 DEBUG nova.compute.manager [req-8dae5116-1f87-43d4-9908-e7a9726f3d94 req-351973b2-000b-4709-88de-08546529947d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Refreshing instance network info cache due to event network-changed-71c1eec4-610d-4d07-b3d3-b94428ea07fc. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.636 189489 DEBUG oslo_concurrency.lockutils [req-8dae5116-1f87-43d4-9908-e7a9726f3d94 req-351973b2-000b-4709-88de-08546529947d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.638 189489 DEBUG oslo_concurrency.lockutils [req-8dae5116-1f87-43d4-9908-e7a9726f3d94 req-351973b2-000b-4709-88de-08546529947d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.639 189489 DEBUG nova.network.neutron [req-8dae5116-1f87-43d4-9908-e7a9726f3d94 req-351973b2-000b-4709-88de-08546529947d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Refreshing network info cache for port 71c1eec4-610d-4d07-b3d3-b94428ea07fc _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:26:36 compute-0 nova_compute[189485]: 2025-11-29 15:26:36.829 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:39 compute-0 nova_compute[189485]: 2025-11-29 15:26:39.753 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:41 compute-0 nova_compute[189485]: 2025-11-29 15:26:41.268 189489 DEBUG nova.network.neutron [req-8dae5116-1f87-43d4-9908-e7a9726f3d94 req-351973b2-000b-4709-88de-08546529947d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated VIF entry in instance network info cache for port 71c1eec4-610d-4d07-b3d3-b94428ea07fc. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:26:41 compute-0 nova_compute[189485]: 2025-11-29 15:26:41.269 189489 DEBUG nova.network.neutron [req-8dae5116-1f87-43d4-9908-e7a9726f3d94 req-351973b2-000b-4709-88de-08546529947d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:26:41 compute-0 nova_compute[189485]: 2025-11-29 15:26:41.293 189489 DEBUG oslo_concurrency.lockutils [req-8dae5116-1f87-43d4-9908-e7a9726f3d94 req-351973b2-000b-4709-88de-08546529947d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
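[annotation] The refreshed cache entry above is nova's network_info model serialized as JSON; fixed and floating addresses nest under network.subnets[].ips. A quick extraction sketch over a structure shaped like the one logged:

    def addresses(network_info):
        # yields (fixed, [floating, ...]) pairs from a network_info list
        for vif in network_info:
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    yield (ip['address'],
                           [f['address']
                            for f in ip.get('floating_ips', [])])

    # for the cache entry above this yields
    # ('192.168.0.142', ['192.168.122.215'])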
Nov 29 15:26:41 compute-0 nova_compute[189485]: 2025-11-29 15:26:41.835 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:43 compute-0 podman[240011]: 2025-11-29 15:26:43.689157338 +0000 UTC m=+0.123664607 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 15:26:44 compute-0 nova_compute[189485]: 2025-11-29 15:26:44.758 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:44 compute-0 podman[240031]: 2025-11-29 15:26:44.881615643 +0000 UTC m=+0.107881814 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 15:26:44 compute-0 podman[240032]: 2025-11-29 15:26:44.904124006 +0000 UTC m=+0.137351804 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:26:44 compute-0 podman[240033]: 2025-11-29 15:26:44.925478479 +0000 UTC m=+0.133316026 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, architecture=x86_64, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release=1214.1726694543, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.tags=base rhel9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:26:45 compute-0 podman[240081]: 2025-11-29 15:26:45.054426967 +0000 UTC m=+0.154323260 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible)
Nov 29 15:26:46 compute-0 podman[240107]: 2025-11-29 15:26:46.68053389 +0000 UTC m=+0.125385934 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.openshift.tags=minimal rhel9, config_id=edpm, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 15:26:46 compute-0 nova_compute[189485]: 2025-11-29 15:26:46.840 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:48 compute-0 podman[240129]: 2025-11-29 15:26:48.652910838 +0000 UTC m=+0.109194119 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:26:49 compute-0 nova_compute[189485]: 2025-11-29 15:26:49.762 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:51 compute-0 ovn_controller[97827]: 2025-11-29T15:26:51Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:da:91:00 192.168.0.142
Nov 29 15:26:51 compute-0 ovn_controller[97827]: 2025-11-29T15:26:51Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:da:91:00 192.168.0.142
Nov 29 15:26:51 compute-0 nova_compute[189485]: 2025-11-29 15:26:51.845 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:52 compute-0 nova_compute[189485]: 2025-11-29 15:26:52.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
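The recurring "Running periodic task ComputeManager.*" lines are emitted by oslo.service's periodic-task machinery: methods decorated with @periodic_task.periodic_task are collected by the PeriodicTasks base class and fired from run_periodic_tasks(). A minimal sketch of that pattern (class, task name, and the 60-second spacing are illustrative, not Nova's actual settings):

```python
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    """Collects the decorated methods, like Nova's ComputeManager."""

    # run_immediately=True so a single run_periodic_tasks() call fires it;
    # otherwise the first run waits out the spacing interval.
    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _cleanup_something(self, context):
        print("periodic cleanup ran")

manager = Manager(cfg.CONF)
# The service loop calls this on a timer, producing the DEBUG lines above.
manager.run_periodic_tasks(context=None)
```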
Nov 29 15:26:52 compute-0 podman[240159]: 2025-11-29 15:26:52.651537728 +0000 UTC m=+0.092649726 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:26:54 compute-0 nova_compute[189485]: 2025-11-29 15:26:54.507 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:54 compute-0 nova_compute[189485]: 2025-11-29 15:26:54.508 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:26:54 compute-0 nova_compute[189485]: 2025-11-29 15:26:54.508 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 15:26:54 compute-0 nova_compute[189485]: 2025-11-29 15:26:54.763 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:55 compute-0 nova_compute[189485]: 2025-11-29 15:26:55.294 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:26:55 compute-0 nova_compute[189485]: 2025-11-29 15:26:55.295 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:26:55 compute-0 nova_compute[189485]: 2025-11-29 15:26:55.295 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 15:26:55 compute-0 nova_compute[189485]: 2025-11-29 15:26:55.296 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:26:56 compute-0 nova_compute[189485]: 2025-11-29 15:26:56.849 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.315 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.343 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.344 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.345 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.346 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.347 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.372 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.374 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.375 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
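The Acquiring / acquired / "released" triples, with their waited and held timings, come from oslo.concurrency's lockutils. A sketch of the two forms seen in this journal, with lock names copied from the log and the critical sections stubbed out:

```python
from oslo_concurrency import lockutils

# Decorator form, behind the "compute_resources" lines above: lockutils logs
# how long the caller waited for and held the lock at DEBUG level.
@lockutils.synchronized("compute_resources")
def update_available_resource():
    pass  # critical section

# Context-manager form, behind the refresh_cache-<uuid> lines.
with lockutils.lock("refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37"):
    pass  # refresh the instance network info cache

update_available_resource()
```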
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.375 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.496 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.568 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.570 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.658 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.660 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.724 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.724 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:26:57 compute-0 nova_compute[189485]: 2025-11-29 15:26:57.817 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
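The qemu-img invocations above are wrapped in oslo_concurrency.prlimit so a runaway qemu-img is capped at 1 GiB of address space and 30 s of CPU time, matching the --as=1073741824 --cpu=30 flags in the logged command line. A sketch reproducing one such call through the same library (disk path copied from the log; the environment is trimmed to the two variables the log sets):

```python
from oslo_concurrency import processutils

DISK = "/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk"

# Matches the wrapper flags in the log: --as=1073741824 --cpu=30.
limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)

out, _err = processutils.execute(
    "qemu-img", "info", DISK, "--force-share", "--output=json",
    prlimit=limits,
    env_variables={"LC_ALL": "C", "LANG": "C"},
)
print(out)
```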
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.193 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.195 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5248MB free_disk=72.38359451293945GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.195 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.195 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.489 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.490 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.490 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.571 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.682 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.683 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.701 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.743 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.813 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.883 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updated inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.884 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.884 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
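Placement turns the inventory just logged into schedulable capacity as (total - reserved) × allocation_ratio; working that through with the logged numbers shows why the one instance here (1 VCPU / 512 MB / 2 GB) barely dents it:

```python
# Inventory values copied from the provider update above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2
```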
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.924 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.924 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.729s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.925 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.925 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.942 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.943 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:58 compute-0 nova_compute[189485]: 2025-11-29 15:26:58.943 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 15:26:59 compute-0 nova_compute[189485]: 2025-11-29 15:26:59.094 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:59 compute-0 nova_compute[189485]: 2025-11-29 15:26:59.095 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:59.155 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:26:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:59.156 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:26:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:26:59.157 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:26:59 compute-0 nova_compute[189485]: 2025-11-29 15:26:59.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:59 compute-0 nova_compute[189485]: 2025-11-29 15:26:59.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:26:59 compute-0 podman[203677]: time="2025-11-29T15:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:26:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:26:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4756 "" "Go-http-client/1.1"
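The two GET lines show a collector (the podman exporter that appears further down) polling the libpod REST API over the podman socket. The same listing can be replayed by hand; a sketch, assuming the default rootful socket path /run/podman/podman.sock, which matches CONTAINER_HOST in the podman_exporter config below:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""

    def __init__(self, path):
        super().__init__("localhost")
        self.socket_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])
```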
Nov 29 15:26:59 compute-0 nova_compute[189485]: 2025-11-29 15:26:59.766 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:00 compute-0 nova_compute[189485]: 2025-11-29 15:27:00.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:27:00 compute-0 nova_compute[189485]: 2025-11-29 15:27:00.543 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:27:00 compute-0 nova_compute[189485]: 2025-11-29 15:27:00.543 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 15:27:01 compute-0 openstack_network_exporter[205841]: ERROR   15:27:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:27:01 compute-0 openstack_network_exporter[205841]: ERROR   15:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:27:01 compute-0 openstack_network_exporter[205841]: ERROR   15:27:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:27:01 compute-0 openstack_network_exporter[205841]: ERROR   15:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:27:01 compute-0 openstack_network_exporter[205841]: ERROR   15:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
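These exporter errors mean it found no <daemon>.<pid>.ctl control sockets for ovsdb-server or ovn-northd; ovn-northd normally runs on the control plane rather than on a compute node, so that part is expected here. A quick way to see which control sockets actually exist, checking the default run directories plus the /var/lib/openvswitch/ovn path bind-mounted to /run/ovn in the ovn_controller config above:

```python
import glob

# Each OVS/OVN daemon that is running locally exposes a <name>.<pid>.ctl
# socket in its run directory; an empty list means the exporter has nothing
# to query for that daemon on this host.
for pattern in ("/run/openvswitch/*.ctl",
                "/run/ovn/*.ctl",
                "/var/lib/openvswitch/ovn/*.ctl"):
    print(pattern, "->", glob.glob(pattern))
```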
Nov 29 15:27:01 compute-0 nova_compute[189485]: 2025-11-29 15:27:01.853 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:03 compute-0 nova_compute[189485]: 2025-11-29 15:27:03.225 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:27:03 compute-0 nova_compute[189485]: 2025-11-29 15:27:03.250 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Triggering sync for uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Nov 29 15:27:03 compute-0 nova_compute[189485]: 2025-11-29 15:27:03.252 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:27:03 compute-0 nova_compute[189485]: 2025-11-29 15:27:03.253 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:27:03 compute-0 nova_compute[189485]: 2025-11-29 15:27:03.300 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:27:04 compute-0 nova_compute[189485]: 2025-11-29 15:27:04.769 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:05 compute-0 podman[240196]: 2025-11-29 15:27:05.693122758 +0000 UTC m=+0.132205565 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:27:06 compute-0 ovn_controller[97827]: 2025-11-29T15:27:06Z|00034|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Nov 29 15:27:06 compute-0 nova_compute[189485]: 2025-11-29 15:27:06.856 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:09 compute-0 nova_compute[189485]: 2025-11-29 15:27:09.771 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:11 compute-0 nova_compute[189485]: 2025-11-29 15:27:11.861 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:14 compute-0 podman[240220]: 2025-11-29 15:27:14.660012017 +0000 UTC m=+0.105278224 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 15:27:14 compute-0 nova_compute[189485]: 2025-11-29 15:27:14.777 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:15 compute-0 podman[240240]: 2025-11-29 15:27:15.632943256 +0000 UTC m=+0.079728359 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, io.openshift.expose-services=, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, name=ubi9, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, com.redhat.component=ubi9-container)
Nov 29 15:27:15 compute-0 podman[240241]: 2025-11-29 15:27:15.644792765 +0000 UTC m=+0.082247317 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 15:27:15 compute-0 podman[240242]: 2025-11-29 15:27:15.678739505 +0000 UTC m=+0.103183999 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 29 15:27:15 compute-0 podman[240243]: 2025-11-29 15:27:15.729693161 +0000 UTC m=+0.150078656 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:27:16 compute-0 nova_compute[189485]: 2025-11-29 15:27:16.865 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:17 compute-0 podman[240318]: 2025-11-29 15:27:17.692393059 +0000 UTC m=+0.128929878 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=edpm, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Nov 29 15:27:19 compute-0 podman[240339]: 2025-11-29 15:27:19.715294641 +0000 UTC m=+0.149836629 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd)
Nov 29 15:27:19 compute-0 nova_compute[189485]: 2025-11-29 15:27:19.779 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:21 compute-0 nova_compute[189485]: 2025-11-29 15:27:21.870 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:23 compute-0 podman[240367]: 2025-11-29 15:27:23.638681723 +0000 UTC m=+0.079141043 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:27:24 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:24.308 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:27:24 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:24.309 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 15:27:24 compute-0 nova_compute[189485]: 2025-11-29 15:27:24.309 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:24 compute-0 nova_compute[189485]: 2025-11-29 15:27:24.782 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:26 compute-0 nova_compute[189485]: 2025-11-29 15:27:26.875 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:29 compute-0 podman[203677]: time="2025-11-29T15:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:27:29 compute-0 nova_compute[189485]: 2025-11-29 15:27:29.754 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:29 compute-0 nova_compute[189485]: 2025-11-29 15:27:29.755 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:27:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4761 "" "Go-http-client/1.1"
Nov 29 15:27:29 compute-0 nova_compute[189485]: 2025-11-29 15:27:29.777 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 15:27:29 compute-0 nova_compute[189485]: 2025-11-29 15:27:29.785 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:29 compute-0 nova_compute[189485]: 2025-11-29 15:27:29.872 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:29 compute-0 nova_compute[189485]: 2025-11-29 15:27:29.872 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:29 compute-0 nova_compute[189485]: 2025-11-29 15:27:29.884 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 15:27:29 compute-0 nova_compute[189485]: 2025-11-29 15:27:29.884 189489 INFO nova.compute.claims [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Claim successful on node compute-0.ctlplane.example.com
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.040 189489 DEBUG nova.compute.provider_tree [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.059 189489 DEBUG nova.scheduler.client.report [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.084 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.211s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.085 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.173 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.174 189489 DEBUG nova.network.neutron [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.202 189489 INFO nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.249 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 15:27:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:30.312 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.342 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.344 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.344 189489 INFO nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Creating image(s)
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.346 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.346 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.348 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.374 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.459 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.460 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "a7996d50170914c9415f43103aca35ccc26834bd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.461 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.485 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.543 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.545 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd,backing_fmt=raw /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.592 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd,backing_fmt=raw /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.593 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.594 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.676 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.679 189489 DEBUG nova.virt.disk.api [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Checking if we can resize image /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.679 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.744 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.745 189489 DEBUG nova.virt.disk.api [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Cannot resize image /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.745 189489 DEBUG nova.objects.instance [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 940da983-04c4-46c2-8cd4-96ce0736a67e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.765 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.765 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.766 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.786 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.845 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.847 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.848 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.871 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.941 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.943 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.985 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.986 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:30 compute-0 nova_compute[189485]: 2025-11-29 15:27:30.987 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:31 compute-0 nova_compute[189485]: 2025-11-29 15:27:31.046 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:31 compute-0 nova_compute[189485]: 2025-11-29 15:27:31.047 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 15:27:31 compute-0 nova_compute[189485]: 2025-11-29 15:27:31.048 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Ensure instance console log exists: /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 15:27:31 compute-0 nova_compute[189485]: 2025-11-29 15:27:31.048 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:31 compute-0 nova_compute[189485]: 2025-11-29 15:27:31.049 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:31 compute-0 nova_compute[189485]: 2025-11-29 15:27:31.049 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:31 compute-0 openstack_network_exporter[205841]: ERROR   15:27:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:27:31 compute-0 openstack_network_exporter[205841]: ERROR   15:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:27:31 compute-0 openstack_network_exporter[205841]: ERROR   15:27:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:27:31 compute-0 openstack_network_exporter[205841]: ERROR   15:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:27:31 compute-0 openstack_network_exporter[205841]: ERROR   15:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:27:31 compute-0 nova_compute[189485]: 2025-11-29 15:27:31.879 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.070 189489 DEBUG nova.network.neutron [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Successfully updated port: 7a530c9e-4765-4cce-b971-8ebbcff0880f _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.093 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.093 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquired lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.093 189489 DEBUG nova.network.neutron [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.170 189489 DEBUG nova.compute.manager [req-13596937-56db-4680-8aac-682045690dd3 req-e1f528cf-9f0f-41fd-9f79-bc829252db98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received event network-changed-7a530c9e-4765-4cce-b971-8ebbcff0880f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.170 189489 DEBUG nova.compute.manager [req-13596937-56db-4680-8aac-682045690dd3 req-e1f528cf-9f0f-41fd-9f79-bc829252db98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Refreshing instance network info cache due to event network-changed-7a530c9e-4765-4cce-b971-8ebbcff0880f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.171 189489 DEBUG oslo_concurrency.lockutils [req-13596937-56db-4680-8aac-682045690dd3 req-e1f528cf-9f0f-41fd-9f79-bc829252db98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.253 189489 DEBUG nova.network.neutron [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:27:34 compute-0 nova_compute[189485]: 2025-11-29 15:27:34.789 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.046 189489 DEBUG nova.network.neutron [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updating instance_info_cache with network_info: [{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.074 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Releasing lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.075 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Instance network_info: |[{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.077 189489 DEBUG oslo_concurrency.lockutils [req-13596937-56db-4680-8aac-682045690dd3 req-e1f528cf-9f0f-41fd-9f79-bc829252db98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.077 189489 DEBUG nova.network.neutron [req-13596937-56db-4680-8aac-682045690dd3 req-e1f528cf-9f0f-41fd-9f79-bc829252db98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Refreshing network info cache for port 7a530c9e-4765-4cce-b971-8ebbcff0880f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.084 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Start _get_guest_xml network_info=[{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:24:51Z,direct_url=<?>,disk_format='qcow2',id=a4b79580-904f-4527-8cf1-3888cf1ff785,min_disk=0,min_ram=0,name='cirros',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:24:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.105 189489 WARNING nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.115 189489 DEBUG nova.virt.libvirt.host [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.116 189489 DEBUG nova.virt.libvirt.host [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.125 189489 DEBUG nova.virt.libvirt.host [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.127 189489 DEBUG nova.virt.libvirt.host [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.128 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.129 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:24:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='34af94d1-a6e1-4bf0-8957-036dc948fe9d',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:24:51Z,direct_url=<?>,disk_format='qcow2',id=a4b79580-904f-4527-8cf1-3888cf1ff785,min_disk=0,min_ram=0,name='cirros',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:24:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.131 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.132 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.132 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.133 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.134 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.135 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.136 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.136 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.137 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.138 189489 DEBUG nova.virt.hardware [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
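
The topology walk above starts from unset flavor/image preferences (0:0:0), caps each dimension at the 65536 default limit, and ends with a single 1:1:1 candidate for one vCPU. A toy re-derivation of that enumeration (illustrative only, not nova's hardware.py):

# Toy re-derivation of the search logged above: enumerate every
# (sockets, cores, threads) factorization of the vCPU count that respects
# the per-dimension maxima. For vcpus=1 this yields exactly [(1, 1, 1)],
# matching "Got 1 possible topologies".
def possible_topologies(vcpus, maxima=(65536, 65536, 65536)):
    max_sockets, max_cores, max_threads = maxima
    topologies = []
    for sockets in range(1, min(vcpus, max_sockets) + 1):
        if vcpus % sockets:
            continue
        for cores in range(1, min(vcpus // sockets, max_cores) + 1):
            if (vcpus // sockets) % cores:
                continue
            threads = vcpus // (sockets * cores)
            if threads <= max_threads:
                topologies.append((sockets, cores, threads))
    return topologies

print(possible_topologies(1))  # -> [(1, 1, 1)]
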
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.146 189489 DEBUG nova.virt.libvirt.vif [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:27:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm',id=2,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-1c17o8s3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:27:30Z,user_data='<~6 KiB of base64-encoded cloud-init multipart MIME (cloud-config, boothook.sh, part-handler.py, loguserdata.py, and heat cfn-init data attachments) elided: the record exceeded the configured 8 KiB rsyslog limit and was truncated mid-payload, see the rsyslogd "message too long" events below>',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=940da983-04c4-46c2-8cd4-96ce0736a67e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.147 189489 DEBUG nova.network.os_vif_util [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.148 189489 DEBUG nova.network.os_vif_util [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:08,bridge_name='br-int',has_traffic_filtering=True,id=7a530c9e-4765-4cce-b971-8ebbcff0880f,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7a530c9e-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
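
The conversion above lifts a handful of fields out of nova's VIF dict to build the os-vif VIFOpenVSwitch object. A hypothetical helper showing the field mapping as read off these two log lines (illustrative only, not nova's nova_to_osvif_vif):

# Hypothetical helper mirroring the conversion logged above. Every key below
# appears in the "Converting VIF" dict; every output field appears in the
# "Converted object" line. Purely illustrative.
def nova_vif_to_osvif_fields(vif: dict) -> dict:
    details = vif.get("details", {})
    return {
        "id": vif["id"],
        "address": vif["address"],
        "plugin": "ovs",
        "bridge_name": details.get("bridge_name", "br-int"),
        "has_traffic_filtering": details.get("port_filter", False),
        "preserve_on_delete": vif.get("preserve_on_delete", False),
        "active": vif.get("active", False),
        "vif_name": vif["devname"],          # e.g. "tap7a530c9e-47"
        "network_id": vif["network"]["id"],
    }
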
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.150 189489 DEBUG nova.objects.instance [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 940da983-04c4-46c2-8cd4-96ce0736a67e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.166 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <uuid>940da983-04c4-46c2-8cd4-96ce0736a67e</uuid>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <name>instance-00000002</name>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <memory>524288</memory>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <nova:name>vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm</nova:name>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:27:36</nova:creationTime>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <nova:flavor name="m1.small">
Nov 29 15:27:36 compute-0 nova_compute[189485]:        <nova:memory>512</nova:memory>
Nov 29 15:27:36 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:27:36 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:27:36 compute-0 nova_compute[189485]:        <nova:ephemeral>1</nova:ephemeral>
Nov 29 15:27:36 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:27:36 compute-0 nova_compute[189485]:        <nova:user uuid="5cbf094e2197487fbe16a0fe6e3076ba">admin</nova:user>
Nov 29 15:27:36 compute-0 nova_compute[189485]:        <nova:project uuid="04d676205d9142d19f3d4ce7389f72a2">admin</nova:project>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="a4b79580-904f-4527-8cf1-3888cf1ff785"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:27:36 compute-0 nova_compute[189485]:        <nova:port uuid="7a530c9e-4765-4cce-b971-8ebbcff0880f">
Nov 29 15:27:36 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="192.168.0.24" ipVersion="4"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <system>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <entry name="serial">940da983-04c4-46c2-8cd4-96ce0736a67e</entry>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <entry name="uuid">940da983-04c4-46c2-8cd4-96ce0736a67e</entry>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </system>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <os>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  </os>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <features>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  </features>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <target dev="vdb" bus="virtio"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.config"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:56:61:08"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <target dev="tap7a530c9e-47"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/console.log" append="off"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <video>
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </video>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:27:36 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:27:36 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:27:36 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:27:36 compute-0 nova_compute[189485]: </domain>
Nov 29 15:27:36 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
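
The domain definition dumped above is plain libvirt XML, so it can be sanity-checked offline with the Python standard library. A minimal sketch, assuming the dump was saved to a local file (the filename here is hypothetical):

# Stdlib sketch for inspecting a saved copy of the domain XML above: list
# each disk's device type, backing file, and target dev/bus.
import xml.etree.ElementTree as ET

tree = ET.parse("instance-00000002.xml")  # hypothetical path to the saved dump
for disk in tree.findall("./devices/disk"):
    src = disk.find("source")
    tgt = disk.find("target")
    print(disk.get("device"),
          src.get("file") if src is not None else "-",
          "->", tgt.get("dev"), "on", tgt.get("bus"))
# Expected output for this guest: two virtio qcow2 disks (vda, vdb) and the
# raw config-drive cdrom on sata (sda).
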
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.166 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Preparing to wait for external event network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.167 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.167 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.167 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
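
The Acquiring/acquired/released trio above is the standard log signature of oslo.concurrency's locking helper. A minimal usage sketch of the same primitive (the decorated function body is a stand-in; requires the oslo.concurrency package):

# Minimal sketch of the oslo.concurrency primitive that produces the
# "Acquiring lock" / "acquired" / "released" lines above.
from oslo_concurrency import lockutils

@lockutils.synchronized("940da983-04c4-46c2-8cd4-96ce0736a67e-events")
def _create_or_get_event():
    # critical section: at most one thread touches the per-instance
    # event registry at a time
    pass

_create_or_get_event()
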
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.168 189489 DEBUG nova.virt.libvirt.vif [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:27:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm',id=2,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-1c17o8s3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:27:30Z,user_data='<same base64-encoded cloud-init multipart payload as in the get_config record above, elided; this record was likewise truncated by the 8 KiB rsyslog limit, see the second "message too long" event below>',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=940da983-04c4-46c2-8cd4-96ce0736a67e,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.168 189489 DEBUG nova.network.os_vif_util [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.169 189489 DEBUG nova.network.os_vif_util [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:08,bridge_name='br-int',has_traffic_filtering=True,id=7a530c9e-4765-4cce-b971-8ebbcff0880f,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7a530c9e-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.169 189489 DEBUG os_vif [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:08,bridge_name='br-int',has_traffic_filtering=True,id=7a530c9e-4765-4cce-b971-8ebbcff0880f,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7a530c9e-47') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.170 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.170 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.171 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.176 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.176 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7a530c9e-47, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.176 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7a530c9e-47, col_values=(('external_ids', {'iface-id': '7a530c9e-4765-4cce-b971-8ebbcff0880f', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:56:61:08', 'vm-uuid': '940da983-04c4-46c2-8cd4-96ce0736a67e'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
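
The AddBridgeCommand/AddPortCommand/DbSetCommand sequence above corresponds to ovsdbapp's Open_vSwitch schema API. A hedged sketch of an equivalent transaction; the ovsdb-server socket path is an assumption for a typical host, and this is illustrative rather than os-vif's actual plugging code:

# Illustrative equivalent of the transaction logged above (requires the
# ovsdbapp package and a reachable ovsdb-server).
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

OVSDB = "unix:/run/openvswitch/db.sock"  # assumed local endpoint
idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

with api.transaction(check_error=True) as txn:
    # idempotent: may_exist=True makes both commands no-ops if already present,
    # hence "Transaction caused no change" for the pre-existing br-int above
    txn.add(api.add_br("br-int", may_exist=True, datapath_type="system"))
    txn.add(api.add_port("br-int", "tap7a530c9e-47", may_exist=True))
    txn.add(api.db_set(
        "Interface", "tap7a530c9e-47",
        ("external_ids", {"iface-id": "7a530c9e-4765-4cce-b971-8ebbcff0880f",
                          "iface-status": "active",
                          "attached-mac": "fa:16:3e:56:61:08"})))

The external_ids written here are what lets ovn-controller match the OVS interface to its logical port and claim it, as seen further down.
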
Nov 29 15:27:36 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:27:36.146 189489 DEBUG nova.virt.libvirt.vif [None req-749820f3-8d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.179 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:36 compute-0 NetworkManager[56360]: <info>  [1764430056.1820] manager: (tap7a530c9e-47): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.181 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.196 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.201 189489 INFO os_vif [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:56:61:08,bridge_name='br-int',has_traffic_filtering=True,id=7a530c9e-4765-4cce-b971-8ebbcff0880f,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7a530c9e-47')#033[00m
Nov 29 15:27:36 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:27:36.168 189489 DEBUG nova.virt.libvirt.vif [None req-749820f3-8d [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.287 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.288 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.289 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.289 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No VIF found with MAC fa:16:3e:56:61:08, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:27:36 compute-0 nova_compute[189485]: 2025-11-29 15:27:36.290 189489 INFO nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Using config drive#033[00m
Nov 29 15:27:36 compute-0 podman[240420]: 2025-11-29 15:27:36.669618778 +0000 UTC m=+0.112112187 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:27:37 compute-0 nova_compute[189485]: 2025-11-29 15:27:37.342 189489 INFO nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Creating config drive at /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.config#033[00m
Nov 29 15:27:37 compute-0 nova_compute[189485]: 2025-11-29 15:27:37.348 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zc22bu_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:27:37 compute-0 nova_compute[189485]: 2025-11-29 15:27:37.488 189489 DEBUG oslo_concurrency.processutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp7zc22bu_" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
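
The config drive is produced by shelling out to mkisofs, as the command line above shows. A standalone re-creation using the same oslo.concurrency helper that issued it; the staging directory is the per-boot temp dir from the log and will not exist elsewhere:

# Re-creation of the logged config-drive build (requires oslo.concurrency
# and the mkisofs binary). Each list element is one argv entry, so the
# multi-word publisher string is a single argument here.
from oslo_concurrency import processutils

out = "/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.config"
staging = "/tmp/tmp7zc22bu_"  # per-boot temp dir from the log

processutils.execute(
    "/usr/bin/mkisofs", "-o", out,
    "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
    "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
    "-quiet", "-J", "-r", "-V", "config-2",
    staging)

The volume label "config-2" is what the guest's cloud-init looks for when mounting the drive exposed as the sata cdrom (sda) in the domain XML above.
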
Nov 29 15:27:37 compute-0 kernel: tap7a530c9e-47: entered promiscuous mode
Nov 29 15:27:37 compute-0 NetworkManager[56360]: <info>  [1764430057.5754] manager: (tap7a530c9e-47): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Nov 29 15:27:37 compute-0 ovn_controller[97827]: 2025-11-29T15:27:37Z|00035|binding|INFO|Claiming lport 7a530c9e-4765-4cce-b971-8ebbcff0880f for this chassis.
Nov 29 15:27:37 compute-0 ovn_controller[97827]: 2025-11-29T15:27:37Z|00036|binding|INFO|7a530c9e-4765-4cce-b971-8ebbcff0880f: Claiming fa:16:3e:56:61:08 192.168.0.24
Nov 29 15:27:37 compute-0 nova_compute[189485]: 2025-11-29 15:27:37.578 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.587 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:08 192.168.0.24'], port_security=['fa:16:3e:56:61:08 192.168.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nju3ymh64jso-rpmxigkbvqy5-bmxqrfirgt4s-port-xtgikmozjmyk', 'neutron:cidrs': '192.168.0.24/24', 'neutron:device_id': '940da983-04c4-46c2-8cd4-96ce0736a67e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa63adc8-00c5-408f-a9a0-653db4d11058', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nju3ymh64jso-rpmxigkbvqy5-bmxqrfirgt4s-port-xtgikmozjmyk', 'neutron:project_id': '04d676205d9142d19f3d4ce7389f72a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab1ce576-0f3a-4a3e-abf1-69502fd41864', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566ecd39-faeb-413e-8894-df94f2ba695a, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=7a530c9e-4765-4cce-b971-8ebbcff0880f) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.588 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 7a530c9e-4765-4cce-b971-8ebbcff0880f in datapath fa63adc8-00c5-408f-a9a0-653db4d11058 bound to our chassis
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.590 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa63adc8-00c5-408f-a9a0-653db4d11058
Nov 29 15:27:37 compute-0 ovn_controller[97827]: 2025-11-29T15:27:37Z|00037|binding|INFO|Setting lport 7a530c9e-4765-4cce-b971-8ebbcff0880f ovn-installed in OVS
Nov 29 15:27:37 compute-0 ovn_controller[97827]: 2025-11-29T15:27:37Z|00038|binding|INFO|Setting lport 7a530c9e-4765-4cce-b971-8ebbcff0880f up in Southbound
Nov 29 15:27:37 compute-0 nova_compute[189485]: 2025-11-29 15:27:37.595 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:37 compute-0 nova_compute[189485]: 2025-11-29 15:27:37.598 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.612 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[745f6f01-60be-48bb-92fe-edebbb337269]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:27:37 compute-0 systemd-machined[155802]: New machine qemu-2-instance-00000002.
Nov 29 15:27:37 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.649 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[26d02b1e-b1c9-4ee3-9470-02543a807859]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.653 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[4fe4b6f5-e5d5-4f31-8b72-fffe2f913e37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:27:37 compute-0 systemd-udevd[240468]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.692 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[3071bf1d-b887-4564-a8bb-ab9605847bfd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:27:37 compute-0 NetworkManager[56360]: <info>  [1764430057.6958] device (tap7a530c9e-47): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:27:37 compute-0 NetworkManager[56360]: <info>  [1764430057.6964] device (tap7a530c9e-47): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.722 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[185ccb21-555f-4348-bd27-a09a51b47bd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa63adc8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:9e:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373724, 'reachable_time': 37305, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240473, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.747 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[e3462aa4-79cc-462d-a325-ed9a6da27d43]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373741, 'tstamp': 373741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240478, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373746, 'tstamp': 373746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240478, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.750 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa63adc8-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:27:37 compute-0 nova_compute[189485]: 2025-11-29 15:27:37.752 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:37 compute-0 nova_compute[189485]: 2025-11-29 15:27:37.753 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.755 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa63adc8-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.755 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.756 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa63adc8-00, col_values=(('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:27:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:37.756 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 15:27:38 compute-0 nova_compute[189485]: 2025-11-29 15:27:38.523 189489 DEBUG nova.compute.manager [req-eee7bce2-c702-4d87-90c2-d9aaf09c5ff3 req-32f1ccd3-6005-47d3-a2c1-974eb2cce1f2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received event network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:27:38 compute-0 nova_compute[189485]: 2025-11-29 15:27:38.523 189489 DEBUG oslo_concurrency.lockutils [req-eee7bce2-c702-4d87-90c2-d9aaf09c5ff3 req-32f1ccd3-6005-47d3-a2c1-974eb2cce1f2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:38 compute-0 nova_compute[189485]: 2025-11-29 15:27:38.524 189489 DEBUG oslo_concurrency.lockutils [req-eee7bce2-c702-4d87-90c2-d9aaf09c5ff3 req-32f1ccd3-6005-47d3-a2c1-974eb2cce1f2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:38 compute-0 nova_compute[189485]: 2025-11-29 15:27:38.524 189489 DEBUG oslo_concurrency.lockutils [req-eee7bce2-c702-4d87-90c2-d9aaf09c5ff3 req-32f1ccd3-6005-47d3-a2c1-974eb2cce1f2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:38 compute-0 nova_compute[189485]: 2025-11-29 15:27:38.524 189489 DEBUG nova.compute.manager [req-eee7bce2-c702-4d87-90c2-d9aaf09c5ff3 req-32f1ccd3-6005-47d3-a2c1-974eb2cce1f2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Processing event network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 15:27:38 compute-0 nova_compute[189485]: 2025-11-29 15:27:38.641 189489 DEBUG nova.network.neutron [req-13596937-56db-4680-8aac-682045690dd3 req-e1f528cf-9f0f-41fd-9f79-bc829252db98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updated VIF entry in instance network info cache for port 7a530c9e-4765-4cce-b971-8ebbcff0880f. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:27:38 compute-0 nova_compute[189485]: 2025-11-29 15:27:38.641 189489 DEBUG nova.network.neutron [req-13596937-56db-4680-8aac-682045690dd3 req-e1f528cf-9f0f-41fd-9f79-bc829252db98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updating instance_info_cache with network_info: [{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:27:38 compute-0 nova_compute[189485]: 2025-11-29 15:27:38.656 189489 DEBUG oslo_concurrency.lockutils [req-13596937-56db-4680-8aac-682045690dd3 req-e1f528cf-9f0f-41fd-9f79-bc829252db98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.788 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430059.7873347, 940da983-04c4-46c2-8cd4-96ce0736a67e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.789 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] VM Started (Lifecycle Event)
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.794 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.800 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.809 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.815 189489 INFO nova.virt.libvirt.driver [-] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Instance spawned successfully.
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.815 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.823 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.831 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.852 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.852 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.853 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.854 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.855 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.856 189489 DEBUG nova.virt.libvirt.driver [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.864 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.865 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430059.7876167, 940da983-04c4-46c2-8cd4-96ce0736a67e => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.865 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] VM Paused (Lifecycle Event)
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.946 189489 INFO nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Took 9.60 seconds to spawn the instance on the hypervisor.
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.946 189489 DEBUG nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:27:39 compute-0 nova_compute[189485]: 2025-11-29 15:27:39.997 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.003 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430059.805928, 940da983-04c4-46c2-8cd4-96ce0736a67e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.003 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] VM Resumed (Lifecycle Event)
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.021 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.023 189489 INFO nova.compute.manager [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Took 10.19 seconds to build instance.
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.030 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.047 189489 DEBUG oslo_concurrency.lockutils [None req-749820f3-8d48-46ec-94fc-14d1ea8228ff 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.292s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.730 189489 DEBUG nova.compute.manager [req-428e6df4-4500-4404-a651-b807dd6dfbdb req-09cb7800-fa41-4409-be91-c155488883db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received event network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.731 189489 DEBUG oslo_concurrency.lockutils [req-428e6df4-4500-4404-a651-b807dd6dfbdb req-09cb7800-fa41-4409-be91-c155488883db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.731 189489 DEBUG oslo_concurrency.lockutils [req-428e6df4-4500-4404-a651-b807dd6dfbdb req-09cb7800-fa41-4409-be91-c155488883db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.732 189489 DEBUG oslo_concurrency.lockutils [req-428e6df4-4500-4404-a651-b807dd6dfbdb req-09cb7800-fa41-4409-be91-c155488883db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.732 189489 DEBUG nova.compute.manager [req-428e6df4-4500-4404-a651-b807dd6dfbdb req-09cb7800-fa41-4409-be91-c155488883db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] No waiting events found dispatching network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:27:40 compute-0 nova_compute[189485]: 2025-11-29 15:27:40.732 189489 WARNING nova.compute.manager [req-428e6df4-4500-4404-a651-b807dd6dfbdb req-09cb7800-fa41-4409-be91-c155488883db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received unexpected event network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f for instance with vm_state active and task_state None.
Nov 29 15:27:41 compute-0 nova_compute[189485]: 2025-11-29 15:27:41.181 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:44 compute-0 nova_compute[189485]: 2025-11-29 15:27:44.795 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:44 compute-0 podman[240487]: 2025-11-29 15:27:44.840766406 +0000 UTC m=+0.131624245 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125)
Nov 29 15:27:46 compute-0 nova_compute[189485]: 2025-11-29 15:27:46.183 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:46 compute-0 podman[240508]: 2025-11-29 15:27:46.686253704 +0000 UTC m=+0.117969155 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, release=1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, release-0.7.12=, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=edpm, distribution-scope=public)
Nov 29 15:27:46 compute-0 podman[240509]: 2025-11-29 15:27:46.70348538 +0000 UTC m=+0.134317188 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 15:27:46 compute-0 podman[240511]: 2025-11-29 15:27:46.734599183 +0000 UTC m=+0.147477295 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:27:46 compute-0 podman[240510]: 2025-11-29 15:27:46.736869405 +0000 UTC m=+0.153448457 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:27:48 compute-0 podman[240584]: 2025-11-29 15:27:48.657865177 +0000 UTC m=+0.103350820 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_id=edpm, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:27:49 compute-0 nova_compute[189485]: 2025-11-29 15:27:49.798 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:50 compute-0 podman[240606]: 2025-11-29 15:27:50.665047664 +0000 UTC m=+0.108772967 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 15:27:51 compute-0 nova_compute[189485]: 2025-11-29 15:27:51.189 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:54 compute-0 podman[240626]: 2025-11-29 15:27:54.699588383 +0000 UTC m=+0.143622751 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:27:54 compute-0 nova_compute[189485]: 2025-11-29 15:27:54.800 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:55 compute-0 nova_compute[189485]: 2025-11-29 15:27:55.512 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:27:56 compute-0 nova_compute[189485]: 2025-11-29 15:27:56.193 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:27:56 compute-0 nova_compute[189485]: 2025-11-29 15:27:56.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:27:56 compute-0 nova_compute[189485]: 2025-11-29 15:27:56.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:27:56 compute-0 nova_compute[189485]: 2025-11-29 15:27:56.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:27:57 compute-0 nova_compute[189485]: 2025-11-29 15:27:57.316 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:27:57 compute-0 nova_compute[189485]: 2025-11-29 15:27:57.317 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:27:57 compute-0 nova_compute[189485]: 2025-11-29 15:27:57.317 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:27:57 compute-0 nova_compute[189485]: 2025-11-29 15:27:57.318 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.841 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.856 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.857 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.858 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.859 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.898 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.899 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.899 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:58 compute-0 nova_compute[189485]: 2025-11-29 15:27:58.900 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.150 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:59.156 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:27:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:59.157 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:27:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:27:59.157 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.214 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.215 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.274 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.275 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.356 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.357 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.451 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.467 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.532 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.534 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.596 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.598 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.684 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.688 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:27:59 compute-0 podman[203677]: time="2025-11-29T15:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:27:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:27:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4766 "" "Go-http-client/1.1"
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.803 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:27:59 compute-0 nova_compute[189485]: 2025-11-29 15:27:59.808 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.120s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
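Each qemu-img probe above is spawned through oslo.concurrency's processutils, with the child wrapped by oslo_concurrency.prlimit to cap its address space and CPU time. A minimal sketch of that call pattern (the instance path is illustrative):

    from oslo_concurrency import processutils

    # --as=1073741824 and --cpu=30 on the logged command line map to these limits.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', '/var/lib/nova/instances/<uuid>/disk',  # illustrative
        '--force-share', '--output=json',
        prlimit=limits,
    )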
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.240 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.242 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5090MB free_disk=72.3825798034668GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.242 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.243 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.445 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.446 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.447 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.448 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
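The used_* figures in that final view follow from the two placement allocations logged just above plus the host's reserved memory; a quick arithmetic cross-check:

    # Each instance holds {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1} in placement.
    allocations = [{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}] * 2
    reserved_ram_mb = 512  # MEMORY_MB 'reserved' as reported to placement

    used_ram = reserved_ram_mb + sum(a['MEMORY_MB'] for a in allocations)  # 1536 MB
    used_disk = sum(a['DISK_GB'] for a in allocations)                     # 4 GB
    used_vcpus = sum(a['VCPU'] for a in allocations)                       # 2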
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.539 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.561 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
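Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, so the reported values imply:

    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 70.2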
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.605 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:28:00 compute-0 nova_compute[189485]: 2025-11-29 15:28:00.606 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.363s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.048 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; polling can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.049 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0f7ef0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.058 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 29 15:28:01 compute-0 nova_compute[189485]: 2025-11-29 15:28:01.197 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:28:01 compute-0 nova_compute[189485]: 2025-11-29 15:28:01.232 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:28:01 compute-0 nova_compute[189485]: 2025-11-29 15:28:01.233 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:28:01 compute-0 nova_compute[189485]: 2025-11-29 15:28:01.234 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:28:01 compute-0 nova_compute[189485]: 2025-11-29 15:28:01.234 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:28:01 compute-0 openstack_network_exporter[205841]: ERROR   15:28:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:28:01 compute-0 openstack_network_exporter[205841]: ERROR   15:28:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:28:01 compute-0 openstack_network_exporter[205841]: ERROR   15:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:28:01 compute-0 openstack_network_exporter[205841]: ERROR   15:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:28:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:01.495 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/b5d60fb8-b63e-4b0a-b908-00453be8ce37 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}21f1b25129fd7f828fba82e66d37137d0fe6cb4aa99b37755c299ad1aab8f053" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 29 15:28:02 compute-0 nova_compute[189485]: 2025-11-29 15:28:02.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:28:02 compute-0 nova_compute[189485]: 2025-11-29 15:28:02.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 15:28:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:02.581 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Sat, 29 Nov 2025 15:28:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-539d7d07-4e03-4fa5-9343-47c5925caa47 x-openstack-request-id: req-539d7d07-4e03-4fa5-9343-47c5925caa47 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 29 15:28:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:02.582 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "b5d60fb8-b63e-4b0a-b908-00453be8ce37", "name": "test_0", "status": "ACTIVE", "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "user_id": "5cbf094e2197487fbe16a0fe6e3076ba", "metadata": {}, "hostId": "3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17", "image": {"id": "a4b79580-904f-4527-8cf1-3888cf1ff785", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a4b79580-904f-4527-8cf1-3888cf1ff785"}]}, "flavor": {"id": "34af94d1-a6e1-4bf0-8957-036dc948fe9d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/34af94d1-a6e1-4bf0-8957-036dc948fe9d"}]}, "created": "2025-11-29T15:26:06Z", "updated": "2025-11-29T15:26:19Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.142", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:da:91:00"}, {"version": 4, "addr": "192.168.122.215", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:da:91:00"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/b5d60fb8-b63e-4b0a-b908-00453be8ce37"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/b5d60fb8-b63e-4b0a-b908-00453be8ce37"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-29T15:26:18.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 29 15:28:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:02.582 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/b5d60fb8-b63e-4b0a-b908-00453be8ce37 used request id req-539d7d07-4e03-4fa5-9343-47c5925caa47 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 29 15:28:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:02.583 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
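The REQ/RESP pair above is ceilometer's per-instance metadata lookup through python-novaclient. A roughly equivalent standalone call (endpoint and credentials are placeholders, not values from this deployment):

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone.example.com:5000/v3',  # placeholder endpoint
        username='ceilometer', password='***', project_name='service',
        user_domain_name='Default', project_domain_name='Default',
    )
    nova = client.Client('2.1', session=session.Session(auth=auth))

    # Same GET /v2.1/servers/{id} as the logged request.
    server = nova.servers.get('b5d60fb8-b63e-4b0a-b908-00453be8ce37')
    print(server.name, server.status)  # test_0 ACTIVE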
Nov 29 15:28:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:02.585 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 940da983-04c4-46c2-8cd4-96ce0736a67e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 29 15:28:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:02.586 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/940da983-04c4-46c2-8cd4-96ce0736a67e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}21f1b25129fd7f828fba82e66d37137d0fe6cb4aa99b37755c299ad1aab8f053" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.485 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Sat, 29 Nov 2025 15:28:02 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-d45d9bda-d561-4b62-9750-5aa979aa6745 x-openstack-request-id: req-d45d9bda-d561-4b62-9750-5aa979aa6745 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.486 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "940da983-04c4-46c2-8cd4-96ce0736a67e", "name": "vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm", "status": "ACTIVE", "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "user_id": "5cbf094e2197487fbe16a0fe6e3076ba", "metadata": {"metering.server_group": "cf461906-40b9-4ac3-86c2-0d606dd14d99"}, "hostId": "3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17", "image": {"id": "a4b79580-904f-4527-8cf1-3888cf1ff785", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a4b79580-904f-4527-8cf1-3888cf1ff785"}]}, "flavor": {"id": "34af94d1-a6e1-4bf0-8957-036dc948fe9d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/34af94d1-a6e1-4bf0-8957-036dc948fe9d"}]}, "created": "2025-11-29T15:27:28Z", "updated": "2025-11-29T15:27:39Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.24", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:56:61:08"}, {"version": 4, "addr": "192.168.122.226", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:56:61:08"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/940da983-04c4-46c2-8cd4-96ce0736a67e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/940da983-04c4-46c2-8cd4-96ce0736a67e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-29T15:27:39.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.486 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/940da983-04c4-46c2-8cd4-96ce0736a67e used request id req-d45d9bda-d561-4b62-9750-5aa979aa6745 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.487 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '940da983-04c4-46c2-8cd4-96ce0736a67e', 'name': 'vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.489 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:28:03.488842) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.497 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for b5d60fb8-b63e-4b0a-b908-00453be8ce37 / tap71c1eec4-61 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.497 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2132 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.504 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 940da983-04c4-46c2-8cd4-96ce0736a67e / tap7a530c9e-47 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.504 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.505 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.507 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.507 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.507 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.508 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.508 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:28:03.507851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.510 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.511 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.511 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.511 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.511 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.512 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.512 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:28:03.511956) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.544 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.575 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.576 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 940da983-04c4-46c2-8cd4-96ce0736a67e: ceilometer.compute.pollsters.NoVolumeException
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.577 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.577 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.577 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.577 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.578 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.578 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:28:03.578155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.579 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.580 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.580 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.580 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.581 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.581 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.581 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.582 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.582 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-29T15:28:03.581393) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.582 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm>]
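That ERROR is ceilometer permanently blacklisting the rate pollster for these resources, since the libvirt inspector provides no rate data. A hedged reconstruction of the manager's handling (the fail_res_list attribute name is quoted from upstream ceilometer from memory, so treat it as an assumption):

    from ceilometer.polling import plugin_base

    blacklist = []

    def poll_once(pollster, manager, cache, resources):
        # Not ceilometer's exact code: resources that raise
        # PollsterPermanentError are skipped on all later polling cycles.
        try:
            return pollster.obj.get_samples(manager, cache, resources)
        except plugin_base.PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)  # assumed attribute name
            return []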
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.583 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.584 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.584 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
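Every pollster run opens with the same pair of coordination checks seen above: with no coordination group configured, the hashring list is [None] and this agent simply polls all locally discovered instances itself; when a group is configured, a hash ring splits resources across the agents in the group. A toy illustration of that decision, using plain modular hashing as a stand-in for the real tooz hash ring:

    import hashlib

    def polling_owner(resource_id, group_agents):
        # No coordination group configured -> hashrings are [None] and this
        # agent keeps the resource (the case in the log above).
        if not group_agents:
            return "this-agent"
        # With a group, pick an owner deterministically from the resource id.
        digest = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return group_agents[digest % len(group_agents)]

    print(polling_owner("b5d60fb8-b63e-4b0a-b908-00453be8ce37", []))
    print(polling_owner("b5d60fb8-b63e-4b0a-b908-00453be8ce37",
                        ["agent-a", "agent-b", "agent-c"]))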
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:28:03.584108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.586 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.586 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.586 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
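Notice the thread ids in these entries: the polling worker (14) logs the "Pollster heartbeat update" line, while a separate status thread (12) logs "Updated heartbeat for ..." slightly later, sometimes after the next pollster has already started. A minimal sketch of that handshake, assuming a simple queue between the two threads (the actual mechanism inside ceilometer.polling.manager may differ):

    import datetime
    import queue
    import threading
    import time

    heartbeats = queue.Queue()

    def poll_worker(pollster_name):
        # The polling thread ("14" in the log) only records the name ...
        heartbeats.put(pollster_name)  # "Pollster heartbeat update: <name>"

    def status_worker():
        # ... while a separate status thread ("12") timestamps and records it,
        # which is why the "Updated heartbeat" lines trail the polling lines.
        while True:
            name = heartbeats.get()
            ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
            print(f"Updated heartbeat for {name} ({ts})")

    threading.Thread(target=status_worker, daemon=True).start()
    poll_worker("network.outgoing.packets")
    time.sleep(0.1)  # give the status thread time to log before exit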
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.586 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.587 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.587 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.587 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.587 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:28:03.585883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.587 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:28:03.587562) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.649 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.650 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.650 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.723 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.724 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.724 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
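Each instance above yields three disk.device.read.bytes samples, one per attached virtual disk: per-device pollsters emit one sample per (instance, device) pair rather than one per instance. A sketch of that fan-out, with the volumes copied from the log and the device names invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class DiskStats:
        device: str
        read_bytes: int

    def stats_to_samples(instance_id, per_device_stats):
        # One cumulative sample per (instance, device) pair, matching the
        # repeated "<uuid>/disk.device.read.bytes volume: N" lines above.
        return [
            {
                "name": "disk.device.read.bytes",
                "type": "cumulative",
                "unit": "B",
                "volume": s.read_bytes,
                # Per-device resources are keyed by instance id plus device.
                "resource_id": f"{instance_id}-{s.device}",
            }
            for s in per_device_stats
        ]

    samples = stats_to_samples(
        "b5d60fb8-b63e-4b0a-b908-00453be8ce37",
        [DiskStats("vda", 23308800),   # volumes copied from the log;
         DiskStats("vdb", 3227648),    # device names are illustrative
         DiskStats("vdc", 274786)],
    )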
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.725 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.726 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.726 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:28:03.726063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.727 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.727 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:28:03.727817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.749 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.750 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.750 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.779 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.780 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.780 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.780 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.781 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.781 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.781 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 32850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.782 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/cpu volume: 23290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.782 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
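The cpu volumes above are cumulative guest CPU time in nanoseconds, so 32850000000 means roughly 32.85 s of CPU time consumed since the instance started; turning that into a utilisation figure requires two consecutive polls. A worked example, with the second sample invented for illustration:

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        # Fraction of available CPU time used between two cumulative samples.
        return 100.0 * (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    # If the 32850000000 ns above grew to 33150000000 ns one 300 s poll later:
    print(cpu_util_percent(32_850_000_000, 33_150_000_000, 300))  # -> 0.1 (%)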
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.782 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.783 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.783 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.783 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.783 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.783 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.784 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 364658786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.784 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.784 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 1085510 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.785 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.785 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.786 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.786 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.786 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.786 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:28:03.781526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.786 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.786 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:28:03.783186) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.787 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>, <NovaLikeServer: vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm>] on source pollsters from now on!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>, <NovaLikeServer: vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm>]
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.787 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.787 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-29T15:28:03.786601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.787 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.788 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.788 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.788 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.788 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:28:03.788069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.788 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.789 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.789 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.789 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.790 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
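Combining this cycle with the disk.device.read.latency cycle a few entries earlier gives a quick sanity check: assuming the latency meter is cumulative time spent in reads in nanoseconds (as libvirt block stats report it), the first device of b5d60fb8-... averages about half a millisecond per read. A worked check with the two logged values:

    total_read_ns = 438_919_382   # disk.device.read.latency, first device
    read_requests = 840           # disk.device.read.requests, same device

    # Average service time per read, assuming both cumulative counters
    # started at the same point (true here: same instance boot).
    print(f"{total_read_ns / read_requests / 1e6:.2f} ms per read")  # ~0.52 ms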
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.790 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.790 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.790 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.790 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.791 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.791 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.791 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.792 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.792 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.793 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.794 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.794 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.794 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.794 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.795 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.795 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.795 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.796 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.796 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.797 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.797 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.798 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.798 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.799 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.799 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.799 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:28:03.790905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.799 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.800 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:28:03.794868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.800 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:28:03.798996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.800 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.800 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.800 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.801 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.802 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.802 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.802 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.803 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.803 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.803 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.804 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
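Both instances report power.state volume 1. The gauge follows the Nova power-state enumeration, where 1 is RUNNING; the table below is a convenience mapping for reading these samples, not code taken from ceilometer:

    # Nova power-state codes (values 2 and 5 are unused in current Nova).
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    print(POWER_STATES[1])  # both samples above -> RUNNING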
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.804 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.804 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.805 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.805 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:28:03.802998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.805 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.806 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.806 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:28:03.805515) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.807 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.807 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.808 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.809 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.809 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.810 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.810 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.811 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:28:03.809778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.812 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.812 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.813 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.813 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.813 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.813 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.814 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.814 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.814 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.814 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
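With this cycle the per-device disk picture is complete: capacity, allocation and usage were all polled for the same devices in this interval, and the three values appear to correspond to libvirt's block-info triple (virtual capacity, bytes allocated on the backing store, physical size of the image on the host). A quick comparison for the first device of b5d60fb8-..., with the values copied from the samples above:

    # Values copied from the samples above (first device of b5d60fb8-...).
    capacity   = 1_073_741_824   # disk.device.capacity  (1 GiB virtual size)
    allocation = 22_159_360      # disk.device.allocation
    usage      = 21_233_664      # disk.device.usage

    for name, value in [("capacity", capacity),
                        ("allocation", allocation),
                        ("usage", usage)]:
        print(f"{name:10s} {value:>13,d} B  ({value / 2**20:8.1f} MiB)")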
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.815 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.816 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.816 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:28:03.812120) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.816 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:28:03.813402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.817 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.818 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:28:03.816314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.818 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.819 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.819 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.819 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:28:03.818048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.820 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:28:03.819303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.820 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.820 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.821 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.821 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.821 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.822 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.822 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.822 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.823 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.823 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.823 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.823 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.823 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.827 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.827 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:28:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:28:03.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:28:03.821039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
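The ceilometer_agent_compute lines above repeat one fixed cycle per meter: run the [local_instances] discovery, check whether the pollster belongs to a source that requires coordination (here the hashring is [None], so no coordination is needed), record a heartbeat, emit one sample per discovered instance, then log "Finished polling". A minimal, self-contained Python sketch of that cycle shape follows; the class and method names are hypothetical stand-ins, not Ceilometer's actual implementation.

import datetime


class Pollster:
    """Hypothetical stand-in for a ceilometer pollster extension."""

    def __init__(self, name, get_volume):
        self.name = name
        self.get_volume = get_volume  # callable: resource -> sample volume


class MiniPollingManager:
    """Sketch of the per-meter cycle logged above (not Ceilometer's code)."""

    def __init__(self):
        self.heartbeats = {}  # meter name -> time of last successful poll

    def discover(self):
        # Stand-in for the [local_instances] discovery: the two instance
        # UUIDs polled throughout this log.
        return ["b5d60fb8-b63e-4b0a-b908-00453be8ce37",
                "940da983-04c4-46c2-8cd4-96ce0736a67e"]

    def poll(self, pollster, coordination_group=None):
        resources = self.discover()
        # With no coordination group configured (hashring [None] above),
        # every agent simply polls all locally discovered resources.
        self.heartbeats[pollster.name] = datetime.datetime.now(
            datetime.timezone.utc)
        for resource in resources:
            print(f"{resource}/{pollster.name} volume: "
                  f"{pollster.get_volume(resource)}")
        print(f"Finished polling pollster {pollster.name}")


manager = MiniPollingManager()
manager.poll(Pollster("network.incoming.packets.drop", lambda r: 0))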
Nov 29 15:28:04 compute-0 nova_compute[189485]: 2025-11-29 15:28:04.807 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:06 compute-0 nova_compute[189485]: 2025-11-29 15:28:06.200 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:07 compute-0 podman[240677]: 2025-11-29 15:28:07.622254003 +0000 UTC m=+0.074058157 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:28:07 compute-0 ovn_controller[97827]: 2025-11-29T15:28:07Z|00039|memory_trim|INFO|Detected inactivity (last active 30008 ms ago): trimming memory
Nov 29 15:28:09 compute-0 nova_compute[189485]: 2025-11-29 15:28:09.809 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:11 compute-0 nova_compute[189485]: 2025-11-29 15:28:11.203 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:12 compute-0 ovn_controller[97827]: 2025-11-29T15:28:12Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:56:61:08 192.168.0.24
Nov 29 15:28:12 compute-0 ovn_controller[97827]: 2025-11-29T15:28:12Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:56:61:08 192.168.0.24
Nov 29 15:28:14 compute-0 nova_compute[189485]: 2025-11-29 15:28:14.812 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:15 compute-0 podman[240713]: 2025-11-29 15:28:15.65607975 +0000 UTC m=+0.101186861 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Nov 29 15:28:16 compute-0 nova_compute[189485]: 2025-11-29 15:28:16.207 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:17 compute-0 podman[240732]: 2025-11-29 15:28:17.678246686 +0000 UTC m=+0.106328890 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 15:28:17 compute-0 podman[240731]: 2025-11-29 15:28:17.684904376 +0000 UTC m=+0.118924591 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, release=1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public)
Nov 29 15:28:17 compute-0 podman[240733]: 2025-11-29 15:28:17.685570524 +0000 UTC m=+0.104936572 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:28:17 compute-0 podman[240734]: 2025-11-29 15:28:17.717056767 +0000 UTC m=+0.142506280 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 15:28:19 compute-0 podman[240813]: 2025-11-29 15:28:19.656266885 +0000 UTC m=+0.112366373 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter)
Nov 29 15:28:19 compute-0 nova_compute[189485]: 2025-11-29 15:28:19.815 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:21 compute-0 nova_compute[189485]: 2025-11-29 15:28:21.211 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:21 compute-0 podman[240836]: 2025-11-29 15:28:21.655480959 +0000 UTC m=+0.106438503 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 15:28:24 compute-0 nova_compute[189485]: 2025-11-29 15:28:24.818 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:25 compute-0 podman[240857]: 2025-11-29 15:28:25.661980031 +0000 UTC m=+0.101027557 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:28:26 compute-0 nova_compute[189485]: 2025-11-29 15:28:26.215 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:29 compute-0 podman[203677]: time="2025-11-29T15:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:28:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:28:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4762 "" "Go-http-client/1.1"
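The two GET lines above are podman_exporter scraping the libpod REST API over the podman socket (CONTAINER_HOST unix:///run/podman/podman.sock in the container's config_data). A minimal stdlib-only sketch of the same list-containers call, assuming that socket path and API version:

import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket; the path used below comes from the log."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock


conn = UnixHTTPConnection("/run/podman/podman.sock")
# Same endpoint the exporter hit above.
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
for container in containers:
    print(container.get("Names"), container.get("State"))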
Nov 29 15:28:29 compute-0 nova_compute[189485]: 2025-11-29 15:28:29.821 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:31 compute-0 nova_compute[189485]: 2025-11-29 15:28:31.219 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:31 compute-0 openstack_network_exporter[205841]: ERROR   15:28:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:28:31 compute-0 openstack_network_exporter[205841]: ERROR   15:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:28:31 compute-0 openstack_network_exporter[205841]: ERROR   15:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:28:31 compute-0 openstack_network_exporter[205841]: ERROR   15:28:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:28:31 compute-0 openstack_network_exporter[205841]: ERROR   15:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
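All of the exporter failures above reduce to the same cause: it finds no *.ctl control socket files for ovsdb-server or ovn-northd on this host (and no datapath for the dpif-netdev calls). A quick check for those sockets, assuming the conventional OVS and OVN rundirs:

import glob

# Conventional OVS/OVN rundirs; an assumption for this host.
for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    hits = glob.glob(pattern)
    # Mirrors the exporter's complaint when the list is empty.
    print(pattern, "->", hits or "no control socket files found")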
Nov 29 15:28:34 compute-0 nova_compute[189485]: 2025-11-29 15:28:34.824 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:36 compute-0 nova_compute[189485]: 2025-11-29 15:28:36.224 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:38 compute-0 podman[240883]: 2025-11-29 15:28:38.631785921 +0000 UTC m=+0.081413166 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:28:39 compute-0 nova_compute[189485]: 2025-11-29 15:28:39.828 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:41 compute-0 nova_compute[189485]: 2025-11-29 15:28:41.229 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:44 compute-0 nova_compute[189485]: 2025-11-29 15:28:44.831 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:46 compute-0 nova_compute[189485]: 2025-11-29 15:28:46.233 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:46 compute-0 podman[240906]: 2025-11-29 15:28:46.655327207 +0000 UTC m=+0.094218162 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:28:48 compute-0 podman[240925]: 2025-11-29 15:28:48.730092854 +0000 UTC m=+0.067399666 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 15:28:48 compute-0 podman[240930]: 2025-11-29 15:28:48.773475249 +0000 UTC m=+0.090477871 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 29 15:28:48 compute-0 podman[240924]: 2025-11-29 15:28:48.77830073 +0000 UTC m=+0.122575051 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, architecture=x86_64, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-type=git, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 29 15:28:48 compute-0 podman[240936]: 2025-11-29 15:28:48.810441461 +0000 UTC m=+0.130814234 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:28:49 compute-0 nova_compute[189485]: 2025-11-29 15:28:49.834 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:50 compute-0 podman[241004]: 2025-11-29 15:28:50.681225355 +0000 UTC m=+0.117647087 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Nov 29 15:28:51 compute-0 nova_compute[189485]: 2025-11-29 15:28:51.238 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:52 compute-0 podman[241025]: 2025-11-29 15:28:52.677340243 +0000 UTC m=+0.113867384 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 15:28:54 compute-0 nova_compute[189485]: 2025-11-29 15:28:54.837 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:56 compute-0 nova_compute[189485]: 2025-11-29 15:28:56.242 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:28:56 compute-0 podman[241044]: 2025-11-29 15:28:56.655668227 +0000 UTC m=+0.097219295 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
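
The node_exporter invocation above narrows the systemd collector with --collector.systemd.unit-include, a regular expression over unit names that node_exporter applies anchored. A quick way to see what that pattern selects (the unit names below are illustrative, not taken from this host); fullmatch() approximates the anchoring:

import re

# Pattern copied from the --collector.systemd.unit-include flag above.
pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
             "virtqemud.service", "rsyslog.service", "sshd.service"]:
    verdict = "collected" if pattern.fullmatch(unit) else "skipped"
    print(f"{unit}: {verdict}")
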
Nov 29 15:28:57 compute-0 nova_compute[189485]: 2025-11-29 15:28:57.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:28:57 compute-0 nova_compute[189485]: 2025-11-29 15:28:57.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:28:58 compute-0 nova_compute[189485]: 2025-11-29 15:28:58.388 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:28:58 compute-0 nova_compute[189485]: 2025-11-29 15:28:58.389 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:28:58 compute-0 nova_compute[189485]: 2025-11-29 15:28:58.389 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
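
_heal_instance_info_cache is one of nova's oslo.service periodic tasks; the run_periodic_tasks lines above are that machinery firing. A minimal sketch of the same pattern, assuming oslo.service and oslo.config are installed (the class and task names here are hypothetical, not nova's):

from oslo_config import cfg
from oslo_service import periodic_task

class ToyManager(periodic_task.PeriodicTasks):
    """Registers periodic tasks the way nova's ComputeManager does."""

    def __init__(self):
        super().__init__(cfg.CONF)

    @periodic_task.periodic_task(spacing=60)
    def _heal_info_cache(self, context):
        # nova's real task refreshes one instance's network info per pass
        print("healing info cache")

mgr = ToyManager()
mgr.run_periodic_tasks(context=None)  # the service loop calls this repeatedly
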
Nov 29 15:28:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:28:59.157 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:28:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:28:59.158 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:28:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:28:59.158 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
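
The acquiring/acquired/released trio above is oslo.concurrency's standard lock tracing; both of its usual entry points produce these DEBUG lines. A minimal sketch, assuming oslo.concurrency is installed (lock names copied from the log for illustration):

from oslo_concurrency import lockutils

# Context-manager form: logs 'Acquiring lock' / 'Acquired lock' /
# 'Releasing lock', as in the refresh_cache lines above.
with lockutils.lock("refresh_cache-demo"):
    pass  # critical section

# Decorator form: logs 'acquired by ... waited' / '"released" by ... held',
# as in the _check_child_processes lines above.
@lockutils.synchronized("_check_child_processes")
def check_children():
    pass

check_children()
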
Nov 29 15:28:59 compute-0 podman[203677]: time="2025-11-29T15:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:28:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:28:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4768 "" "Go-http-client/1.1"
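
The two GET lines above are podman's system service answering its libpod REST API over a unix socket. A stdlib-only sketch of the same containers/json query (socket path as configured for podman_exporter elsewhere in this log):

import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client over an AF_UNIX socket (small local helper)."""

    def __init__(self, sock_path):
        super().__init__("localhost")  # host is unused; we dial the socket
        self.sock_path = sock_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")
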
Nov 29 15:28:59 compute-0 nova_compute[189485]: 2025-11-29 15:28:59.840 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.762 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updating instance_info_cache with network_info: [{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
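
The cache payload above is a list of VIFs, each nesting subnets, fixed IPs, and floating IPs. A short walk over that exact structure, trimmed to the fields it uses:

import json

# Trimmed copy of the instance_info_cache entry logged above.
network_info = json.loads("""
[{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f",
  "network": {"subnets": [{"ips": [{"address": "192.168.0.24",
    "floating_ips": [{"address": "192.168.122.226"}]}]}]}}]
""")

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            for fip in ip.get("floating_ips", []):
                print(f"port {vif['id']}: fixed {ip['address']} "
                      f"-> floating {fip['address']}")
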
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.781 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.782 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.783 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.784 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.785 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.812 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.813 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.814 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.815 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:29:00 compute-0 nova_compute[189485]: 2025-11-29 15:29:00.927 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.019 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.020 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.119 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
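
Each qemu-img probe above is wrapped in oslo.concurrency's prlimit shim, capping the child's address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30). A minimal sketch of the same call, assuming oslo.concurrency and qemu-img are available (disk path copied from the log):

from oslo_concurrency import processutils

# Reproduces the logged command line, including the prlimit wrapper.
limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
out, _err = processutils.execute(
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info",
    "/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk",
    "--force-share", "--output=json",
    prlimit=limits)
print(out)
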
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.120 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.220 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.221 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.247 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.306 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.315 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.411 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.412 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:29:01 compute-0 openstack_network_exporter[205841]: ERROR   15:29:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:29:01 compute-0 openstack_network_exporter[205841]: ERROR   15:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:29:01 compute-0 openstack_network_exporter[205841]: ERROR   15:29:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:29:01 compute-0 openstack_network_exporter[205841]: ERROR   15:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:29:01 compute-0 openstack_network_exporter[205841]: ERROR   15:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
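
The exporter errors above mean no ovs/ovn control sockets were visible from its namespace: ovs-appctl reaches a daemon through its <name>.<pid>.ctl socket, and ovn-northd does not run on a compute node at all. A hedged way to check by hand, assuming ovs-appctl is installed and the usual /run/openvswitch socket layout:

import glob
import subprocess

# List the control sockets appctl would use.
for ctl in glob.glob("/run/openvswitch/*.ctl") + glob.glob("/run/ovn/*.ctl"):
    print("control socket:", ctl)

# Probe ovs-vswitchd (the daemon behind the dpif-netdev/* commands the
# exporter tried); this fails the same way if the socket is absent.
subprocess.run(["ovs-appctl", "-t", "ovs-vswitchd", "version"], check=False)
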
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.491 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.492 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.577 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.578 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:29:01 compute-0 nova_compute[189485]: 2025-11-29 15:29:01.664 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.135 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.136 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5043MB free_disk=72.36108016967773GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.136 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.136 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.229 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.230 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.230 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.230 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.335 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.358 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
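
Placement derives schedulable capacity per resource class from this inventory as (total - reserved) * allocation_ratio. Working that through with the numbers logged above:

# capacity = (total - reserved) * allocation_ratio, per resource class
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# -> MEMORY_MB: 7167, VCPU: 32, DISK_GB: 70.2
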
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.361 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:29:02 compute-0 nova_compute[189485]: 2025-11-29 15:29:02.362 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.225s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:29:03 compute-0 nova_compute[189485]: 2025-11-29 15:29:03.061 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:03 compute-0 nova_compute[189485]: 2025-11-29 15:29:03.061 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:03 compute-0 nova_compute[189485]: 2025-11-29 15:29:03.061 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:03 compute-0 nova_compute[189485]: 2025-11-29 15:29:03.062 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:04 compute-0 nova_compute[189485]: 2025-11-29 15:29:04.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:04 compute-0 nova_compute[189485]: 2025-11-29 15:29:04.505 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:04 compute-0 nova_compute[189485]: 2025-11-29 15:29:04.506 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:29:04 compute-0 nova_compute[189485]: 2025-11-29 15:29:04.843 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:06 compute-0 nova_compute[189485]: 2025-11-29 15:29:06.252 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:09 compute-0 podman[241091]: 2025-11-29 15:29:09.705229282 +0000 UTC m=+0.133348862 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:29:09 compute-0 nova_compute[189485]: 2025-11-29 15:29:09.846 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:11 compute-0 nova_compute[189485]: 2025-11-29 15:29:11.258 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:14 compute-0 nova_compute[189485]: 2025-11-29 15:29:14.849 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:16 compute-0 nova_compute[189485]: 2025-11-29 15:29:16.261 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:17 compute-0 podman[241116]: 2025-11-29 15:29:17.643107148 +0000 UTC m=+0.091804216 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true)
Nov 29 15:29:19 compute-0 podman[241137]: 2025-11-29 15:29:19.684288556 +0000 UTC m=+0.123846545 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, io.openshift.tags=base rhel9, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public)
Nov 29 15:29:19 compute-0 podman[241138]: 2025-11-29 15:29:19.688101989 +0000 UTC m=+0.121234364 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:29:19 compute-0 podman[241139]: 2025-11-29 15:29:19.691637554 +0000 UTC m=+0.137257028 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:29:19 compute-0 podman[241140]: 2025-11-29 15:29:19.728912054 +0000 UTC m=+0.158167624 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:29:19 compute-0 nova_compute[189485]: 2025-11-29 15:29:19.851 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:21 compute-0 nova_compute[189485]: 2025-11-29 15:29:21.266 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:21 compute-0 podman[241214]: 2025-11-29 15:29:21.662495868 +0000 UTC m=+0.111783759 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7)
Nov 29 15:29:23 compute-0 podman[241233]: 2025-11-29 15:29:23.693097068 +0000 UTC m=+0.130208927 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 15:29:24 compute-0 nova_compute[189485]: 2025-11-29 15:29:24.856 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:26 compute-0 nova_compute[189485]: 2025-11-29 15:29:26.271 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:27 compute-0 podman[241253]: 2025-11-29 15:29:27.667425178 +0000 UTC m=+0.105152419 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:29:29 compute-0 podman[203677]: time="2025-11-29T15:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:29:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:29:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Nov 29 15:29:29 compute-0 nova_compute[189485]: 2025-11-29 15:29:29.859 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:31 compute-0 nova_compute[189485]: 2025-11-29 15:29:31.277 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:31 compute-0 openstack_network_exporter[205841]: ERROR   15:29:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:29:31 compute-0 openstack_network_exporter[205841]: ERROR   15:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:29:31 compute-0 openstack_network_exporter[205841]: ERROR   15:29:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:29:31 compute-0 openstack_network_exporter[205841]: ERROR   15:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:29:31 compute-0 openstack_network_exporter[205841]: ERROR   15:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:29:34 compute-0 nova_compute[189485]: 2025-11-29 15:29:34.862 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:36 compute-0 nova_compute[189485]: 2025-11-29 15:29:36.281 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:39 compute-0 nova_compute[189485]: 2025-11-29 15:29:39.865 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:40 compute-0 podman[241278]: 2025-11-29 15:29:40.630417428 +0000 UTC m=+0.080998154 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:29:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 15:29:41 compute-0 nova_compute[189485]: 2025-11-29 15:29:41.286 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:44 compute-0 nova_compute[189485]: 2025-11-29 15:29:44.870 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:46 compute-0 nova_compute[189485]: 2025-11-29 15:29:46.291 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:48 compute-0 podman[241303]: 2025-11-29 15:29:48.814303427 +0000 UTC m=+0.261940361 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, managed_by=edpm_ansible, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:29:49 compute-0 nova_compute[189485]: 2025-11-29 15:29:49.870 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:50 compute-0 podman[241324]: 2025-11-29 15:29:50.660804329 +0000 UTC m=+0.103311723 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible)
Nov 29 15:29:50 compute-0 podman[241325]: 2025-11-29 15:29:50.664390426 +0000 UTC m=+0.110188169 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 15:29:50 compute-0 podman[241326]: 2025-11-29 15:29:50.698236888 +0000 UTC m=+0.125795765 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:29:50 compute-0 podman[241329]: 2025-11-29 15:29:50.7174464 +0000 UTC m=+0.141018368 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 15:29:51 compute-0 nova_compute[189485]: 2025-11-29 15:29:51.294 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
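[editor's annotation] The recurring "[POLLIN] on fd 26 __log_wakeup" lines are the OVS IDL's poll loop: ovsdbapp keeps the OVSDB connection's file descriptor registered with a poller and logs each readiness wakeup from ovs/poller.py. A minimal sketch of the same pattern using only the Python standard library; the fd and the log format are taken from the lines above, everything else is illustrative:

    import select
    import socket
    import logging

    logging.basicConfig(level=logging.DEBUG)
    log = logging.getLogger("ovs_idl.vlog")

    # Stand-in for the OVSDB connection socket (fd 26 in the log above).
    sock, peer = socket.socketpair()
    peer.send(b"update")  # simulate the server pushing a notification

    poller = select.poll()
    poller.register(sock.fileno(), select.POLLIN)

    # One iteration of the wakeup loop: block until the fd is readable,
    # then log the event the way ovs/poller.py's __log_wakeup does.
    for fd, events in poller.poll():
        if events & select.POLLIN:
            log.debug("[POLLIN] on fd %d", fd)
            sock.recv(4096)  # drain so the fd goes quiet again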
Nov 29 15:29:52 compute-0 podman[241404]: 2025-11-29 15:29:52.686719274 +0000 UTC m=+0.119649318 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, release=1755695350, architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:29:54 compute-0 podman[241425]: 2025-11-29 15:29:54.693032526 +0000 UTC m=+0.135270103 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible)
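[editor's annotation] Each health_status event above embeds the container's edpm-generated config_data, whose 'healthcheck' key pairs a probe command ('test') with a host directory ('mount') that the 'volumes' lists bind-mount read-only at /openstack inside the container. A minimal sketch of how such a dict could map onto podman health-check flags; the helper is hypothetical and is not the edpm_ansible implementation:

    # Hypothetical helper: derive "podman run" arguments from an edpm-style
    # config_data dict like the ones embedded in the health_status events.
    def healthcheck_args(config_data: dict) -> list[str]:
        hc = config_data.get("healthcheck", {})
        args = []
        if "test" in hc:
            args += ["--health-cmd", hc["test"]]
        if "mount" in hc:
            # The probe script directory is mounted read-only at /openstack.
            args += ["--volume", f"{hc['mount']}:/openstack:ro,z"]
        return args

    config_data = {
        "image": "quay.io/podified-antelope-centos9/openstack-multipathd:current-podified",
        "healthcheck": {
            "test": "/openstack/healthcheck",
            "mount": "/var/lib/openstack/healthchecks/multipathd",
        },
    }
    print(healthcheck_args(config_data))
    # ['--health-cmd', '/openstack/healthcheck',
    #  '--volume', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']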
Nov 29 15:29:54 compute-0 nova_compute[189485]: 2025-11-29 15:29:54.874 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:56 compute-0 nova_compute[189485]: 2025-11-29 15:29:56.297 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:29:57 compute-0 nova_compute[189485]: 2025-11-29 15:29:57.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:58 compute-0 nova_compute[189485]: 2025-11-29 15:29:58.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:29:58 compute-0 nova_compute[189485]: 2025-11-29 15:29:58.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:29:58 compute-0 nova_compute[189485]: 2025-11-29 15:29:58.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
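[editor's annotation] The _poll_rebooting_instances and _heal_instance_info_cache entries are oslo.service periodic tasks: manager methods decorated so that run_periodic_tasks invokes them on a schedule. A minimal sketch of that mechanism, assuming the standard oslo.service API; the task names come from the log, while the spacing values and the manager class are illustrative (nova configures its own intervals):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class ComputeManagerSketch(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        # run_immediately=True so the sketch fires on the first pass;
        # the spacing here is illustrative only.
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            print("Starting heal instance info cache")

        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_rebooting_instances(self, context):
            print("Polling rebooting instances")

    mgr = ComputeManagerSketch()
    # The service loop calls this repeatedly; tasks that are due run in turn.
    mgr.run_periodic_tasks(context=None, raise_on_error=False)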
Nov 29 15:29:58 compute-0 podman[241446]: 2025-11-29 15:29:58.677225995 +0000 UTC m=+0.124099458 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
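[editor's annotation] node_exporter above is published on host port 9100 (from the 'ports' entry) with most collectors disabled and systemd units filtered by the unit-include regex. A quick way to see what it actually serves is to scrape the metrics endpoint; this sketch assumes plain HTTP for brevity, although the web.config.file in the command line may enforce TLS or auth, in which case the scheme and verification settings would need to match:

    import requests

    # Port 9100 comes from the 'ports' entry in the config_data above.
    resp = requests.get("http://localhost:9100/metrics", timeout=5)
    resp.raise_for_status()

    # Keep only the systemd collector samples that the
    # --collector.systemd.unit-include regex lets through.
    for line in resp.text.splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)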
Nov 29 15:29:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:29:59.159 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:29:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:29:59.159 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:29:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:29:59.160 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:29:59 compute-0 nova_compute[189485]: 2025-11-29 15:29:59.432 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:29:59 compute-0 nova_compute[189485]: 2025-11-29 15:29:59.438 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:29:59 compute-0 nova_compute[189485]: 2025-11-29 15:29:59.439 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:29:59 compute-0 nova_compute[189485]: 2025-11-29 15:29:59.440 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
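[editor's annotation] The Acquiring/acquired/released triple around "_check_child_processes" is oslo.concurrency's lockutils instrumentation: every guarded section logs the wait time and hold time around the critical section. A minimal sketch of the same pattern with the standard oslo.concurrency API; the lock name is taken from the log, the function bodies are illustrative:

    from oslo_concurrency import lockutils

    # Context-manager form: the acquire/release DEBUG lines seen above
    # are emitted by lockutils itself.
    def check_child_processes():
        with lockutils.lock("_check_child_processes"):
            pass  # inspect monitored child processes here

    # Decorator form, matching the neutron ProcessMonitor usage.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes_sync():
        pass

    check_child_processes()
    check_child_processes_sync()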
Nov 29 15:29:59 compute-0 podman[203677]: time="2025-11-29T15:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:29:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:29:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4772 "" "Go-http-client/1.1"
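[editor's annotation] The two GET requests against /v4.9.3/libpod/... are a Go client (per the Go-http-client agent) querying the podman system service over its API socket. The same container listing can be reproduced with the podman-py client; the socket path below is an assumption, since the log does not show it:

    from podman import PodmanClient

    # Assumed socket path for the podman system service.
    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        # Equivalent of GET /libpod/containers/json?all=true from the log;
        # the stats endpoint queried above has a corresponding stats API.
        for ctr in client.containers.list(all=True):
            print(ctr.name, ctr.status)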
Nov 29 15:29:59 compute-0 nova_compute[189485]: 2025-11-29 15:29:59.875 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.049 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.049 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.049 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.049 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
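[editor's annotation] The manager messages above describe the executor model: each polling source gets a ThreadPoolExecutor, every pollster is registered as a task on it, and with more pollsters than worker threads (here, one thread) the cycle serializes. A minimal sketch of that scheduling shape with the standard library; names are illustrative, and the real manager also threads through the caches, pollster history, and discovery state visible in the registration lines:

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name, cache, discovery_cache):
        # Stand-in for Pollster.get_samples(); the real code polls libvirt.
        return f"polled {name}"

    pollsters = ["network.outgoing.bytes", "memory.usage", "disk.device.read.bytes"]

    # "Processing pollsters for [pollsters] with [1] threads": more tasks
    # than workers, so pollsters run one after another on the single worker.
    with ThreadPoolExecutor(max_workers=1) as executor:
        cache, discovery_cache = {}, {}
        futures = [executor.submit(run_pollster, p, cache, discovery_cache)
                   for p in pollsters]
        for f in futures:
            print(f.result())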
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.051 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.056 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.059 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '940da983-04c4-46c2-8cd4-96ce0736a67e', 'name': 'vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
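[editor's annotation] The discovery step returns one dict per libvirt guest on the host, combining Nova metadata (flavor, image, tenant) with the libvirt domain name. A small sketch of how a consumer might index that output; the dicts are abridged from the log above and the indexing helper is illustrative:

    # Abridged from the discover_libvirt_polling output above.
    instances = [
        {"id": "b5d60fb8-b63e-4b0a-b908-00453be8ce37", "name": "test_0",
         "OS-EXT-SRV-ATTR:instance_name": "instance-00000001",
         "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1}},
        {"id": "940da983-04c4-46c2-8cd4-96ce0736a67e",
         "name": "vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm",
         "OS-EXT-SRV-ATTR:instance_name": "instance-00000002",
         "flavor": {"name": "m1.small", "vcpus": 1, "ram": 512, "disk": 1}},
    ]

    # Index by Nova UUID, which is how the per-sample lines below are keyed.
    by_uuid = {inst["id"]: inst for inst in instances}
    print(by_uuid["b5d60fb8-b63e-4b0a-b908-00453be8ce37"]["name"])  # test_0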
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.059 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.059 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.059 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.059 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.061 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:30:01.059892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.063 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.066 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes volume: 4558 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.067 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.067 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.068 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes.delta volume: 4558 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:30:01.067543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.068 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
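[editor's annotation] network.outgoing.bytes is a cumulative counter, while the .delta variant is the difference since the previous poll: instance b5d6... reports 2202 cumulative but only 70 delta (implying a previous reading of 2132), whereas 940d... reports 4558 for both, consistent with a first observation for that interface. A minimal sketch of delta derivation under that reading; the cache shape and reset handling are assumptions, not the ceilometer implementation:

    # Previous cumulative reading per (instance, meter); empty on first poll.
    _prev = {}

    def delta(instance_id, meter, cumulative):
        key = (instance_id, meter)
        last = _prev.get(key)
        _prev[key] = cumulative
        if last is None or cumulative < last:
            # First observation, or counter reset: report the full value.
            return cumulative
        return cumulative - last

    print(delta("940d", "net.out.bytes", 4558))  # 4558: first poll, as in the log
    print(delta("b5d6", "net.out.bytes", 2132))  # 2132: first poll
    print(delta("b5d6", "net.out.bytes", 2202))  # 70: matches the delta above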
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.068 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.068 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.068 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:30:01.068733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.090 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.112 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/memory.usage volume: 49.15234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
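[editor's annotation] memory.usage is reported in MB (48.91 and 49.15 here, against 512 MB flavors) and is derived from libvirt's per-domain memory statistics. A minimal sketch of reading those stats with libvirt-python; the connection URI is the usual local one, and the exact formula (available minus usable, falling back to unused) is stated here as an assumption about the inspector, not a confirmed detail:

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("b5d60fb8-b63e-4b0a-b908-00453be8ce37")

    stats = dom.memoryStats()  # values in KiB: 'available', 'usable', 'rss', ...
    if "usable" in stats:
        used_kib = stats["available"] - stats["usable"]
    else:
        used_kib = stats["available"] - stats.get("unused", 0)
    print("memory.usage %.2f MB" % (used_kib / 1024.0))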
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.113 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.113 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.114 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.114 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:30:01.113797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.114 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.114 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.115 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.115 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.115 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.115 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes.delta volume: 4759 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:30:01.115243) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.116 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.116 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.116 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.116 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.116 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.116 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets volume: 38 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.117 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:30:01.116382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.117 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.117 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:30:01.117527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.185 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.186 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.186 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.257 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.258 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.258 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.258 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
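[editor's annotation] Each instance reports three disk.device.read.bytes samples per cycle, one per attached block device, consistent with the m1.small flavor's root, ephemeral, and a small third device seen in the capacity samples below. Per-device byte counters come from libvirt's block statistics; a minimal sketch, with the device names assumed (the real set comes from the domain XML):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-00000001")  # from the discovery data above

    # Assumed device names; blockStats returns a 5-tuple per device.
    for dev in ("vda", "vdb", "vdc"):
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(f"disk.device.read.bytes {dev}: {rd_bytes}")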
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.258 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.258 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.259 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.259 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.259 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.259 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.259 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:30:01.259123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.260 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.260 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.260 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.260 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.260 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:30:01.260290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.285 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.285 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.285 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 nova_compute[189485]: 2025-11-29 15:30:01.301 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.323 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.323 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.324 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.324 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.325 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 34610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.325 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/cpu volume: 79450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.326 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:30:01.325078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
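The cpu meter polled above is cumulative guest CPU time in nanoseconds (79450000000 ns is roughly 79.45 s of CPU time consumed by instance 940da983-04c4-46c2-8cd4-96ce0736a67e since boot), so a utilization figure has to be derived from two consecutive polls. A small sketch, assuming a 300 s polling interval and a single vCPU; both are assumptions, not values read from this log.

def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
    """Average CPU utilization (%) between two cumulative cpu samples."""
    used_s = (curr_ns - prev_ns) / 1e9  # nanoseconds of CPU time -> seconds
    return 100.0 * used_s / (interval_s * vcpus)

# Two hypothetical polls 300 s apart: 3 s of CPU time used -> 1.0 %.
print(cpu_util_percent(79_450_000_000, 82_450_000_000, 300))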
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.326 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.326 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.326 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.327 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:30:01.326527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.327 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 490412710 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.327 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 89716861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.328 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 69907902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
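Unlike the meters around it, network.incoming.bytes.rate is skipped here: the manager logs "no new resources found this cycle", i.e. discovery returned nothing this rate pollster has not already handled in the current polling task. A sketch of that guard, with the cache name seen_this_cycle chosen purely for illustration:

def maybe_run(pollster_name, discovered, seen_this_cycle, run):
    # Only resources not yet handled in this cycle trigger a poll.
    new = [r for r in discovered if r not in seen_this_cycle]
    if not new:
        print(f"Skip pollster {pollster_name}, "
              f"no new resources found this cycle")
        return
    seen_this_cycle.update(new)
    run(new)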
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.329 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.329 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.329 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.329 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.330 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.330 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.330 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.331 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.331 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:30:01.329089) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.331 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:30:01.331332) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.331 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.332 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.332 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.332 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.332 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.333 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.333 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.333 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.333 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.333 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.333 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.334 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.334 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.334 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.334 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:30:01.333314) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.335 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.335 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.335 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.335 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.335 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.335 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.336 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 1591768972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:30:01.335292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.336 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 9381814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.336 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.336 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.336 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.337 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.337 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.337 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.337 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.337 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.337 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
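Both instances report power.state volume 1. Assuming the meter follows the usual Nova power-state numbering (consistent with two running KVM guests), the value decodes as RUNNING; a lookup table makes such samples readable:

# Nova-style power-state codes; 1 is RUNNING (assumed mapping).
POWER_STATE = {
    0: "NOSTATE",
    1: "RUNNING",
    3: "PAUSED",
    4: "SHUTDOWN",
    6: "CRASHED",
    7: "SUSPENDED",
}
print(POWER_STATE.get(1, "UNKNOWN"))  # -> RUNNING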
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.338 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:30:01.337205) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.338 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.338 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.339 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:30:01.338490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.339 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.339 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.339 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.340 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.340 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.340 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.340 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.340 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.341 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.341 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.341 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:30:01.340938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.342 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.342 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.343 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.343 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.343 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.343 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.344 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:30:01.342410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.344 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.344 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:30:01.343278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.344 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.345 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
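Taken together, the three per-device disk meters polled in this cycle line up with libvirt's block-info triple: disk.device.capacity is the virtual disk size (1073741824 B = 1 GiB here), disk.device.allocation the bytes allocated on the host, and disk.device.usage the physical size of the image. A sketch that reads the same triple directly with the libvirt-python binding; the connection URI and device name are assumptions, and it must run on the compute host itself:

import libvirt

def block_meters(instance_uuid, dev="vda"):
    conn = libvirt.openReadOnly("qemu:///system")
    try:
        dom = conn.lookupByUUIDString(instance_uuid)
        capacity, allocation, physical = dom.blockInfo(dev)
    finally:
        conn.close()
    return {
        "disk.device.capacity": capacity,      # virtual size, bytes
        "disk.device.allocation": allocation,  # host allocation, bytes
        "disk.device.usage": physical,         # physical image size, bytes
    }

# e.g. block_meters("b5d60fb8-b63e-4b0a-b908-00453be8ce37")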
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.345 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.345 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.345 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.345 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.345 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.345 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.346 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:30:01.345760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.346 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.346 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.346 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.346 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.347 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.347 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.347 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.347 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.347 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.348 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.348 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.348 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.348 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:30:01.347197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.348 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.348 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.348 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.349 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.349 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:30:01.348443) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.349 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.350 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.350 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.350 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.351 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.352 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.353 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.354 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.355 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:30:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:30:01.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:30:01.349989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
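[editor's note] The run above is one complete ceilometer polling cycle: the manager iterates every configured pollster for the task, logs completion per meter, and a worker stamps a heartbeat once the cycle lands. A minimal toy sketch of that shape in Python (hypothetical names, not ceilometer's real classes):

    import datetime

    class PollingTask:
        # Toy model: run each pollster, then record a heartbeat timestamp.
        def __init__(self, pollsters):
            self.pollsters = pollsters
            self.heartbeat = None

        def run_cycle(self):
            for name, poll in self.pollsters.items():
                samples = poll()  # gather samples for one meter
                print(f"Finished processing pollster [{name}] ({len(samples)} samples)")
            self.heartbeat = datetime.datetime.now(datetime.timezone.utc)

    task = PollingTask({"memory.usage": lambda: [512], "cpu": lambda: [3.1e9]})
    task.run_cycle()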
Nov 29 15:30:01 compute-0 openstack_network_exporter[205841]: ERROR   15:30:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:30:01 compute-0 openstack_network_exporter[205841]: ERROR   15:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:30:01 compute-0 openstack_network_exporter[205841]: ERROR   15:30:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:30:01 compute-0 openstack_network_exporter[205841]: ERROR   15:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:30:01 compute-0 openstack_network_exporter[205841]: ERROR   15:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
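[editor's note] These exporter errors mean appctl could not find the unix control sockets it targets: OVS daemons normally expose one per process named <daemon>.<pid>.ctl under their run directory, and without a socket there is nothing to query. A rough Python illustration of that discovery step (paths are the conventional defaults and may differ; ovn-northd typically uses /var/run/ovn rather than /var/run/openvswitch):

    import glob
    import os

    def find_ctl_socket(daemon, rundir="/var/run/openvswitch"):
        # Look for <daemon>.<pid>.ctl; None means the daemon is not running here.
        matches = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
        return matches[0] if matches else None

    for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
        sock = find_ctl_socket(daemon)
        print(daemon, "->", sock or "no control socket files found")

Since ovn-northd is a control-plane service, its absence on a compute node is likely expected, making these errors noisy rather than fatal.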
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.809 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.830 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.831 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
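[editor's note] The cache entry nova just wrote carries the full Neutron view of the port: fixed IP 192.168.0.142 on the private subnet, with floating IP 192.168.122.215 attached and OVN as the bound driver. A short sketch of walking that structure (dict abbreviated from the log line above):

    network_info = [{
        "id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc",
        "network": {"subnets": [{"ips": [{
            "address": "192.168.0.142",
            "floating_ips": [{"address": "192.168.122.215"}],
        }]}]},
    }]

    # List fixed -> floating mappings per VIF, as an operator audit might.
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)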
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.832 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.832 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.833 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.833 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.834 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
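[editor's note] Each "Running periodic task" line is oslo.service walking the tasks registered on ComputeManager under one request-id. The registration pattern, as a minimal sketch (assumes oslo.service and oslo.config are installed; the task body is hypothetical):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # seconds between runs
        def _poll_rescued_instances(self, context):
            pass  # the real manager reaps instances stuck in RESCUE here

    # The service framework invokes this on a timer; each due task produces
    # a DEBUG line like the ones above. Whether a task fires on the very
    # first call depends on its spacing and schedule.
    Manager().run_periodic_tasks(context=None)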
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.867 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.868 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.868 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
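[editor's note] The acquire/release pair around "compute_resources" is oslo.concurrency's lockutils: every resource-tracker mutation runs behind that one named lock, and the waited/held durations are logged so contention shows up directly in the journal. The idiom, sketched with the public decorator:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Body executes with the named in-process lock held; lockutils emits
        # acquired/released debug lines like those above.
        pass

    clean_compute_node_cache()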
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.868 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:30:02 compute-0 nova_compute[189485]: 2025-11-29 15:30:02.966 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.074 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.079 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.153 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.156 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.226 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.229 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.299 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.310 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.386 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.390 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.455 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.457 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.521 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.522 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:30:03 compute-0 nova_compute[189485]: 2025-11-29 15:30:03.588 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
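[editor's note] This burst is the resource audit sizing every instance disk: qemu-img info in JSON mode with --force-share (so a running guest's image can be read safely), wrapped in oslo.concurrency's prlimit helper so a malformed image cannot make the probe exceed 1 GiB of address space (--as=1073741824) or 30 s of CPU (--cpu=30). A roughly equivalent standalone probe, assuming qemu-img is on PATH:

    import json
    import subprocess

    def disk_info(path):
        # Probe an image the way the log shows: shareable read, JSON output.
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True,
        ).stdout
        return json.loads(out)

    # Example (path is a placeholder):
    # info = disk_info("/var/lib/nova/instances/<uuid>/disk")
    # print(info["format"], info["virtual-size"], info.get("actual-size"))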
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.031 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.033 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=72.36108016967773GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.034 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.035 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.132 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.132 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.133 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.133 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.221 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.234 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.236 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.236 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.201s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
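[editor's note] The final resource view and the placement inventory tie out: schedulable capacity per resource class is (total - reserved) * allocation_ratio, so this host advertises (8 - 0) * 4.0 = 32 VCPUs, (7679 - 512) * 1.0 = 7167 MB of RAM, and (79 - 1) * 0.9 = 70.2 GB of disk, against which the two instances' {VCPU: 1, MEMORY_MB: 512, DISK_GB: 2} allocations are counted. The arithmetic, using the logged inventory:

    # Inventory copied from the set_inventory_for_provider line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: schedulable capacity = {capacity}")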
Nov 29 15:30:04 compute-0 nova_compute[189485]: 2025-11-29 15:30:04.877 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:06 compute-0 nova_compute[189485]: 2025-11-29 15:30:06.319 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:07 compute-0 nova_compute[189485]: 2025-11-29 15:30:07.232 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:30:07 compute-0 nova_compute[189485]: 2025-11-29 15:30:07.233 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:30:07 compute-0 nova_compute[189485]: 2025-11-29 15:30:07.234 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:30:09 compute-0 nova_compute[189485]: 2025-11-29 15:30:09.879 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:11 compute-0 nova_compute[189485]: 2025-11-29 15:30:11.320 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
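[editor's note] The recurring [POLLIN] on fd 26 lines are ovsdbapp's IDL loop waking whenever its OVSDB connection has data to read; the primitive underneath is the poller from the ovs Python bindings. A self-contained sketch of that wait (assumes the ovs package is installed; a pipe stands in for the database socket):

    import os
    import select
    from ovs import poller

    r_fd, w_fd = os.pipe()
    os.write(w_fd, b"x")            # make the read end immediately readable

    p = poller.Poller()
    p.fd_wait(r_fd, select.POLLIN)  # same registration ovsdbapp performs
    p.block()                       # returns once the fd is readable
    print("woke up on fd", r_fd)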
Nov 29 15:30:11 compute-0 podman[241497]: 2025-11-29 15:30:11.664064367 +0000 UTC m=+0.104760983 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:30:14 compute-0 nova_compute[189485]: 2025-11-29 15:30:14.881 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:16 compute-0 nova_compute[189485]: 2025-11-29 15:30:16.326 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:19 compute-0 podman[241522]: 2025-11-29 15:30:19.639845636 +0000 UTC m=+0.097426992 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Nov 29 15:30:19 compute-0 nova_compute[189485]: 2025-11-29 15:30:19.887 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:21 compute-0 nova_compute[189485]: 2025-11-29 15:30:21.330 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:21 compute-0 podman[241544]: 2025-11-29 15:30:21.665261789 +0000 UTC m=+0.094460583 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:30:21 compute-0 podman[241543]: 2025-11-29 15:30:21.686543738 +0000 UTC m=+0.130308588 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:30:21 compute-0 podman[241542]: 2025-11-29 15:30:21.695768478 +0000 UTC m=+0.132559518 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, config_id=edpm, version=9.4, container_name=kepler, managed_by=edpm_ansible, name=ubi9, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 15:30:21 compute-0 podman[241545]: 2025-11-29 15:30:21.714602491 +0000 UTC m=+0.139162028 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:30:23 compute-0 podman[241620]: 2025-11-29 15:30:23.717360196 +0000 UTC m=+0.156784949 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.expose-services=, container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350)
Nov 29 15:30:24 compute-0 nova_compute[189485]: 2025-11-29 15:30:24.888 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:25 compute-0 podman[241639]: 2025-11-29 15:30:25.702406089 +0000 UTC m=+0.142044277 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 15:30:26 compute-0 nova_compute[189485]: 2025-11-29 15:30:26.334 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:29 compute-0 podman[241657]: 2025-11-29 15:30:29.65432989 +0000 UTC m=+0.100233339 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 15:30:29 compute-0 podman[203677]: time="2025-11-29T15:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:30:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:30:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
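[editor's note] Those two GET requests are the podman exporter scraping the libpod REST API over the unix socket named in its CONTAINER_HOST (unix:///run/podman/podman.sock). The same container listing, sketched with only the standard library (needs read access to the socket, so typically root):

    import http.client
    import json
    import socket

    class UDSConnection(http.client.HTTPConnection):
        # HTTPConnection that dials a unix socket instead of TCP.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UDSConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")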
Nov 29 15:30:29 compute-0 nova_compute[189485]: 2025-11-29 15:30:29.892 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:31 compute-0 nova_compute[189485]: 2025-11-29 15:30:31.339 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:31 compute-0 openstack_network_exporter[205841]: ERROR   15:30:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:30:31 compute-0 openstack_network_exporter[205841]: ERROR   15:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:30:31 compute-0 openstack_network_exporter[205841]: ERROR   15:30:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:30:31 compute-0 openstack_network_exporter[205841]: ERROR   15:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:30:31 compute-0 openstack_network_exporter[205841]: ERROR   15:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:30:34 compute-0 nova_compute[189485]: 2025-11-29 15:30:34.896 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:36 compute-0 nova_compute[189485]: 2025-11-29 15:30:36.342 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:39 compute-0 nova_compute[189485]: 2025-11-29 15:30:39.898 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:41 compute-0 nova_compute[189485]: 2025-11-29 15:30:41.346 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:42 compute-0 podman[241682]: 2025-11-29 15:30:42.64112295 +0000 UTC m=+0.085329004 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:30:44 compute-0 nova_compute[189485]: 2025-11-29 15:30:44.903 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:46 compute-0 nova_compute[189485]: 2025-11-29 15:30:46.352 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:49 compute-0 nova_compute[189485]: 2025-11-29 15:30:49.906 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:50 compute-0 podman[241706]: 2025-11-29 15:30:50.66282695 +0000 UTC m=+0.101667929 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 15:30:51 compute-0 nova_compute[189485]: 2025-11-29 15:30:51.357 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:52 compute-0 podman[241726]: 2025-11-29 15:30:52.642316621 +0000 UTC m=+0.092827947 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.4, container_name=kepler, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Nov 29 15:30:52 compute-0 podman[241727]: 2025-11-29 15:30:52.666826958 +0000 UTC m=+0.106224242 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:30:52 compute-0 podman[241733]: 2025-11-29 15:30:52.679148184 +0000 UTC m=+0.100941189 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 29 15:30:52 compute-0 podman[241734]: 2025-11-29 15:30:52.707075824 +0000 UTC m=+0.118660281 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 15:30:54 compute-0 podman[241807]: 2025-11-29 15:30:54.646060533 +0000 UTC m=+0.095337896 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, release=1755695350, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 29 15:30:54 compute-0 nova_compute[189485]: 2025-11-29 15:30:54.909 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:30:56 compute-0 nova_compute[189485]: 2025-11-29 15:30:56.361 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
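The recurring ovsdbapp [POLLIN] lines here and throughout this section are the OVS Python IDL waking from its poll loop (ovs/poller.py:263). A minimal sketch of that loop's shape, assuming an already-connected ovs.db.idl.Idl instance named idl:

    from ovs import poller as ovs_poller

    def run_idl_once(idl):
        # Register the IDL's file descriptors, sleep until one is readable
        # (the [POLLIN] wakeups logged above), then apply ovsdb updates.
        p = ovs_poller.Poller()
        idl.wait(p)
        p.block()
        idl.run()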
Nov 29 15:30:56 compute-0 podman[241827]: 2025-11-29 15:30:56.672286316 +0000 UTC m=+0.126047582 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:30:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:30:59.160 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:30:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:30:59.161 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:30:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:30:59.162 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
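The acquire/acquired/released triple above is oslo.concurrency's standard lock tracing (lockutils.py:404/409/423). A minimal sketch of the pattern that produces it; the lock name comes from the log, the function body is hypothetical:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named in-process lock held; oslo emits the
        # "Acquiring" / "acquired" / "released" DEBUG lines seen above.
        pass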
Nov 29 15:30:59 compute-0 nova_compute[189485]: 2025-11-29 15:30:59.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:30:59 compute-0 podman[203677]: time="2025-11-29T15:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:30:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:30:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4780 "" "Go-http-client/1.1"
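The two GET lines above show the libpod REST API being polled over the podman unix socket (the "@" client field is a unix-socket peer). A sketch of an equivalent client, assuming the /run/podman/podman.sock path that podman_exporter is configured with elsewhere in this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Endpoint taken from the access-log line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())), "containers")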
Nov 29 15:30:59 compute-0 nova_compute[189485]: 2025-11-29 15:30:59.910 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:00 compute-0 nova_compute[189485]: 2025-11-29 15:31:00.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:00 compute-0 nova_compute[189485]: 2025-11-29 15:31:00.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:31:00 compute-0 podman[241847]: 2025-11-29 15:31:00.665389718 +0000 UTC m=+0.110897789 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:31:01 compute-0 nova_compute[189485]: 2025-11-29 15:31:01.366 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:01 compute-0 openstack_network_exporter[205841]: ERROR   15:31:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:31:01 compute-0 openstack_network_exporter[205841]: ERROR   15:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:31:01 compute-0 openstack_network_exporter[205841]: ERROR   15:31:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:31:01 compute-0 openstack_network_exporter[205841]: ERROR   15:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:31:01 compute-0 openstack_network_exporter[205841]: ERROR   15:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
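The exporter errors above all trace back to missing ovs-appctl control sockets: the exporter sees no ovsdb-server or ovn-northd *.ctl files (ovn-northd does not run on a compute node), and the dpif-netdev/* commands only apply to a userspace datapath, whereas the ports on this host use datapath_type "system" (kernel). A sketch of the underlying call, with the conventional socket location as an assumption:

    import glob
    import subprocess

    # e.g. /run/openvswitch/ovs-vswitchd.1234.ctl (PID component varies)
    sockets = glob.glob("/run/openvswitch/ovs-vswitchd.*.ctl")
    if sockets:
        out = subprocess.run(
            ["ovs-appctl", "-t", sockets[0], "dpif-netdev/pmd-rxq-show"],
            capture_output=True, text=True,
        )
        print(out.stdout or out.stderr)
    else:
        # Matches the exporter's "no control socket files found" error.
        print("no control socket files found")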
Nov 29 15:31:01 compute-0 nova_compute[189485]: 2025-11-29 15:31:01.551 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:31:01 compute-0 nova_compute[189485]: 2025-11-29 15:31:01.552 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:31:01 compute-0 nova_compute[189485]: 2025-11-29 15:31:01.553 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.765 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updating instance_info_cache with network_info: [{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
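The network_info blob above follows nova's network model: one entry per VIF, each carrying its subnets, fixed IPs and any attached floating IPs. A small sketch extracting the addresses, using a literal trimmed to only the fields it reads:

    import json

    # Trimmed from the logged network_info for instance 940da983-....
    network_info = json.loads("""[{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.24",
        "floating_ips": [{"address": "192.168.122.226"}]}]}]}}]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", fips)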
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.784 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.785 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.787 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.788 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.789 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.790 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.791 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.823 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.824 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.825 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.826 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.913 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:04 compute-0 nova_compute[189485]: 2025-11-29 15:31:04.934 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.030 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.033 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.136 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.138 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.230 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.232 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.331 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.343 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.443 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.445 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.550 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.552 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.634 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.636 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:31:05 compute-0 nova_compute[189485]: 2025-11-29 15:31:05.708 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
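Each disk audited above is probed with the same wrapper: oslo_concurrency.prlimit caps address space (--as=1073741824, 1 GiB) and CPU time (--cpu=30 seconds) around qemu-img, and --force-share lets it read an image that a running guest holds open. The logged command, reproduced as a sketch for one of the paths:

    import json
    import subprocess

    disk = "/var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", disk, "--force-share", "--output=json",
    ]
    # qemu-img prints a JSON document describing the image on stdout.
    info = json.loads(subprocess.check_output(cmd))
    print(info.get("format"), info.get("virtual-size"))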
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.204 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.207 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5049MB free_disk=72.36108016967773GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.208 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.208 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.370 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.503 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.505 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.506 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.507 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.573 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.592 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
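Placement derives schedulable capacity from the inventory above as (total - reserved) * allocation_ratio per resource class; worked out for this host:

    # Inventory figures copied from the log line above.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # MEMORY_MB: 7167, VCPU: 32, DISK_GB: 70.2, consistent with the "Final
    # resource view" above: two instances consume 2 VCPU, 1024 MB (plus the
    # 512 MB reservation in used_ram=1536MB) and 4 GB of disk.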
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.595 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:31:06 compute-0 nova_compute[189485]: 2025-11-29 15:31:06.596 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:31:09 compute-0 nova_compute[189485]: 2025-11-29 15:31:09.593 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:09 compute-0 nova_compute[189485]: 2025-11-29 15:31:09.594 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:09 compute-0 nova_compute[189485]: 2025-11-29 15:31:09.620 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:31:09 compute-0 nova_compute[189485]: 2025-11-29 15:31:09.621 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:31:09 compute-0 nova_compute[189485]: 2025-11-29 15:31:09.918 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:11 compute-0 nova_compute[189485]: 2025-11-29 15:31:11.374 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:13 compute-0 podman[241896]: 2025-11-29 15:31:13.698333904 +0000 UTC m=+0.124014186 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:31:14 compute-0 nova_compute[189485]: 2025-11-29 15:31:14.922 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:16 compute-0 nova_compute[189485]: 2025-11-29 15:31:16.379 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:19 compute-0 nova_compute[189485]: 2025-11-29 15:31:19.921 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:21 compute-0 nova_compute[189485]: 2025-11-29 15:31:21.384 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:21 compute-0 podman[241921]: 2025-11-29 15:31:21.635480983 +0000 UTC m=+0.090184706 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true)
Nov 29 15:31:23 compute-0 podman[241942]: 2025-11-29 15:31:23.638146995 +0000 UTC m=+0.090318069 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, distribution-scope=public, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, name=ubi9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 29 15:31:23 compute-0 podman[241944]: 2025-11-29 15:31:23.659114445 +0000 UTC m=+0.095189592 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi)
Nov 29 15:31:23 compute-0 podman[241943]: 2025-11-29 15:31:23.6625705 +0000 UTC m=+0.095040008 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:31:23 compute-0 podman[241950]: 2025-11-29 15:31:23.724819054 +0000 UTC m=+0.150787125 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:31:24 compute-0 nova_compute[189485]: 2025-11-29 15:31:24.926 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:25 compute-0 podman[242022]: 2025-11-29 15:31:25.701882269 +0000 UTC m=+0.142044877 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, version=9.6, architecture=x86_64, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Nov 29 15:31:26 compute-0 nova_compute[189485]: 2025-11-29 15:31:26.390 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:27 compute-0 podman[242043]: 2025-11-29 15:31:27.672156121 +0000 UTC m=+0.108575537 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:31:29 compute-0 podman[203677]: time="2025-11-29T15:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:31:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:31:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4772 "" "Go-http-client/1.1"
Nov 29 15:31:29 compute-0 nova_compute[189485]: 2025-11-29 15:31:29.928 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:31 compute-0 nova_compute[189485]: 2025-11-29 15:31:31.396 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:31 compute-0 openstack_network_exporter[205841]: ERROR   15:31:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:31:31 compute-0 openstack_network_exporter[205841]: ERROR   15:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:31:31 compute-0 openstack_network_exporter[205841]: ERROR   15:31:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:31:31 compute-0 openstack_network_exporter[205841]: ERROR   15:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:31:31 compute-0 openstack_network_exporter[205841]: ERROR   15:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
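The appctl errors above mean the exporter found no OVS/OVN control sockets under the paths it has mounted (its volumes map /var/run/openvswitch to /run/openvswitch and /var/lib/openvswitch/ovn to /run/ovn). The lookup amounts to globbing for *.ctl files; a sketch of the same check, assuming the default socket locations:

    import glob

    # Control sockets normally appear as <daemon>.<pid>.ctl in these dirs.
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found or "no control socket files found")

On a compute node only the ovs-vswitchd/ovsdb-server sockets would normally be present; ovn-northd runs on the control plane, so that particular lookup failing here is unsurprising.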
Nov 29 15:31:31 compute-0 podman[242062]: 2025-11-29 15:31:31.707617636 +0000 UTC m=+0.137361810 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:31:34 compute-0 nova_compute[189485]: 2025-11-29 15:31:34.931 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:36 compute-0 nova_compute[189485]: 2025-11-29 15:31:36.401 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:39 compute-0 nova_compute[189485]: 2025-11-29 15:31:39.934 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:41 compute-0 nova_compute[189485]: 2025-11-29 15:31:41.404 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:44 compute-0 podman[242086]: 2025-11-29 15:31:44.70561971 +0000 UTC m=+0.135771417 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:31:44 compute-0 nova_compute[189485]: 2025-11-29 15:31:44.937 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:46 compute-0 nova_compute[189485]: 2025-11-29 15:31:46.407 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:49 compute-0 nova_compute[189485]: 2025-11-29 15:31:49.938 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:51 compute-0 nova_compute[189485]: 2025-11-29 15:31:51.412 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:52 compute-0 podman[242110]: 2025-11-29 15:31:52.695834573 +0000 UTC m=+0.142689115 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Nov 29 15:31:54 compute-0 podman[242131]: 2025-11-29 15:31:54.665401234 +0000 UTC m=+0.106263793 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:31:54 compute-0 podman[242130]: 2025-11-29 15:31:54.671238833 +0000 UTC m=+0.103070357 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, container_name=kepler, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, vcs-type=git, version=9.4, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 29 15:31:54 compute-0 podman[242132]: 2025-11-29 15:31:54.6718862 +0000 UTC m=+0.102830079 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:31:54 compute-0 podman[242138]: 2025-11-29 15:31:54.728328727 +0000 UTC m=+0.152115031 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 15:31:54 compute-0 nova_compute[189485]: 2025-11-29 15:31:54.939 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:56 compute-0 nova_compute[189485]: 2025-11-29 15:31:56.416 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:31:56 compute-0 podman[242214]: 2025-11-29 15:31:56.722303633 +0000 UTC m=+0.157867138 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41)
Nov 29 15:31:58 compute-0 podman[242235]: 2025-11-29 15:31:58.649119401 +0000 UTC m=+0.091122639 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 15:31:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:31:59.162 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:31:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:31:59.162 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:31:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:31:59.163 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
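The Acquiring/acquired/released triple above is the standard DEBUG trace that oslo.concurrency's lock decorator emits around ProcessMonitor._check_child_processes. A minimal sketch of the pattern that produces it, assuming oslo.concurrency is available:

    from oslo_concurrency import lockutils

    # synchronized() wraps the function in an "inner" closure that logs
    # 'Acquiring lock', 'Lock ... acquired :: waited Ns' and
    # 'Lock ... released :: held Ns' at DEBUG, as in the lines above.
    @lockutils.synchronized('_check_child_processes')
    def _check_child_processes():
        pass  # verify spawned helpers (e.g. haproxy) are still alive

    _check_child_processes()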
Nov 29 15:31:59 compute-0 podman[203677]: time="2025-11-29T15:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:31:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:31:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4772 "" "Go-http-client/1.1"
Nov 29 15:31:59 compute-0 nova_compute[189485]: 2025-11-29 15:31:59.941 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:00 compute-0 nova_compute[189485]: 2025-11-29 15:32:00.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:00 compute-0 nova_compute[189485]: 2025-11-29 15:32:00.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:32:00 compute-0 nova_compute[189485]: 2025-11-29 15:32:00.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
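The heal-instance-info-cache messages come from nova's periodic task machinery: oslo.service iterates the registered tasks and logs 'Running periodic task <name>' before each call. A sketch of how such a task is declared, assuming oslo.service; the spacing value is illustrative, not nova's actual setting:

    from oslo_service import periodic_task

    class ComputeManager(periodic_task.PeriodicTasks):
        # run_periodic_tasks() logs "Running periodic task
        # ComputeManager._heal_instance_info_cache" before invoking this.
        @periodic_task.periodic_task(spacing=60)  # illustrative interval
        def _heal_instance_info_cache(self, context):
            pass  # refresh network info for one instance per pass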
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.050 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.050 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
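The two manager lines above warn that the [pollsters] source has more pollsters than worker threads (a single thread here), so the pollsters queue up and the cycle stretches out. That is ordinary executor queueing; a small sketch:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        time.sleep(0.1)  # pretend to query libvirt
        return meter

    # More tasks than workers: with max_workers=1 the three pollsters run
    # back to back, roughly tripling the cycle time.
    with ThreadPoolExecutor(max_workers=1) as ex:
        start = time.monotonic()
        list(ex.map(poll, ["memory.usage", "network.incoming.bytes",
                           "disk.device.read.bytes"]))
        print(f"elapsed: {time.monotonic() - start:.1f}s")  # ~0.3s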
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.050 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.061 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.065 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '940da983-04c4-46c2-8cd4-96ce0736a67e', 'name': 'vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.065 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.066 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.066 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.066 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.067 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:32:01.066571) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.072 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.077 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes volume: 4628 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.079 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.080 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:32:01.079601) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.080 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
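The .delta meters above report the change in a cumulative counter between two polls: for instance b5d60fb8 the cumulative network.outgoing.bytes reads 2272 while the delta sample is 70, implying the previous cycle read 2202. A sketch of that bookkeeping, with hypothetical values:

    previous = {}

    def delta_sample(instance, counter, value):
        # Change since the last poll of a cumulative counter; the first
        # poll has nothing to diff against and yields 0.
        key = (instance, counter)
        prev = previous.get(key, value)
        previous[key] = value
        return value - prev

    print(delta_sample("b5d60fb8", "network.outgoing.bytes", 2202))  # 0
    print(delta_sample("b5d60fb8", "network.outgoing.bytes", 2272))  # 70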
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.082 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.083 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:32:01.082821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.111 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.9140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.145 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/memory.usage volume: 49.15234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.146 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
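memory.usage is reported in MB, so the two samples above can be read directly against the 512 MB of RAM in the m1.small flavor from the discovery data; a quick check:

    ram_mb = 512            # m1.small flavor ram (discovery data above)
    usage_mb = 49.15234375  # memory.usage for instance 940da983...
    print(f"{usage_mb / ram_mb:.1%} of flavor RAM in use")  # 9.6%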
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.146 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.146 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.147 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.147 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:32:01.147008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.148 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes volume: 4849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.148 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.149 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.149 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.149 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.149 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.150 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.150 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.150 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:32:01.150067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.151 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.152 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.152 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:32:01.152623) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.153 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.153 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets volume: 39 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.154 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.155 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:32:01.155496) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.239 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.239 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.240 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.348 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.349 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.350 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
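
disk.device.read.bytes is a cumulative per-device counter (three devices per instance here), so a throughput figure requires differencing two successive polls. A minimal sketch with hypothetical sample values:

    def byte_rate(prev_bytes, prev_ts, cur_bytes, cur_ts):
        elapsed = cur_ts - prev_ts
        if elapsed <= 0 or cur_bytes < prev_bytes:
            return None  # clock went backwards, or the counter reset
        return (cur_bytes - prev_bytes) / elapsed

    # hypothetical: 308800 bytes read over a 300 s polling interval
    print(byte_rate(23_000_000, 0.0, 23_308_800, 300.0))  # ~1029.3 B/s
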
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.352 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.352 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.352 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.352 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.353 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:32:01.352747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.354 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.354 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.355 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.355 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.356 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.356 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.356 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:32:01.356208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.397 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.397 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.398 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 openstack_network_exporter[205841]: ERROR   15:32:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:32:01 compute-0 openstack_network_exporter[205841]: ERROR   15:32:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:32:01 compute-0 openstack_network_exporter[205841]: ERROR   15:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:32:01 compute-0 openstack_network_exporter[205841]: ERROR   15:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
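
The exporter errors above mean its ovs-appctl-style calls could not locate any daemon control sockets; these normally exist as *.ctl files in the OVS run directory (ovs-vswitchd and ovsdb-server each create a <name>.<pid>.ctl socket). A minimal check, assuming the conventional /var/run/openvswitch path, which this deployment may override:

    import glob

    def find_ctl_sockets(rundir="/var/run/openvswitch"):
        # returns the control sockets the exporter would need to find
        return glob.glob(f"{rundir}/*.ctl")

    print(find_ctl_sockets())  # an empty list reproduces the errors above
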
Nov 29 15:32:01 compute-0 nova_compute[189485]: 2025-11-29 15:32:01.422 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.442 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.442 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.443 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.444 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.444 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.444 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.445 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.445 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.445 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.445 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 36390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.446 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/cpu volume: 199260000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.446 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:32:01.445310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
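
The cpu meter is cumulative CPU time in nanoseconds, so the absolute volumes above (36390000000 and 199260000000 ns) only become a utilisation figure once differenced over the polling interval. A sketch, where the 300 s interval and the vCPU count are assumptions for illustration:

    def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus=1):
        return (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

    # hypothetical previous sample, 60 ms less CPU time than the current one
    print(cpu_util_percent(199_200_000_000, 199_260_000_000, 300))  # 0.02 %
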
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.447 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.448 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.448 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:32:01.448417) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.448 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.449 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.449 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.450 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 490412710 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.450 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 89716861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.451 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 69907902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.452 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.453 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.454 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.454 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.455 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.456 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.456 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:32:01.453498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
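
Taken together, the cumulative disk.device.read.latency (nanoseconds) and disk.device.read.requests counters give a mean cost per read, e.g. 438919382 ns over 840 requests for the first device of instance b5d60fb8 as logged above. A sketch:

    def mean_read_latency_ms(total_latency_ns, total_requests):
        if total_requests == 0:
            return 0.0
        return total_latency_ns / total_requests / 1e6

    print(mean_read_latency_ms(438_919_382, 840))  # ~0.52 ms per read
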
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.458 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.458 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.459 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.459 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:32:01.459085) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.460 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.460 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.461 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.461 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.462 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.463 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.463 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.463 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.463 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:32:01.463635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.464 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.464 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.464 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.464 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.465 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.465 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.466 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.466 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.466 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:32:01.466311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.466 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.467 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.467 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 1591768972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.467 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 9381814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.467 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.468 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.469 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.469 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.469 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:32:01.469074) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.470 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
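
Both instances report power.state volume 1. The values appear to follow the libvirt domain-state enumeration (an assumption about this deployment), in which 1 denotes a running domain:

    # libvirt virDomainState values, for reference
    LIBVIRT_DOMAIN_STATES = {
        0: "no state", 1: "running", 2: "blocked", 3: "paused",
        4: "shutting down", 5: "shut off", 6: "crashed", 7: "pm-suspended",
    }
    print(LIBVIRT_DOMAIN_STATES[1])  # "running"
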
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.470 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.470 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.470 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.470 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.470 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:32:01.470576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.470 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.471 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.471 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.471 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.471 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.472 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.472 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.472 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.473 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.473 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.473 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.473 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.473 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.474 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.474 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.474 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.474 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.475 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.475 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.476 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:32:01.473278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:32:01.474776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.476 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.476 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:32:01.476254) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.476 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.477 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.477 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.477 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.477 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.478 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
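
The three per-device disk meters relate straightforwardly: disk.device.capacity is the virtual disk size (1 GiB here), while disk.device.allocation and disk.device.usage report host-side occupancy of the backing image. A sketch relating the values logged above for the first device of instance b5d60fb8:

    capacity, allocation, usage = 1_073_741_824, 22_159_360, 21_233_664
    print(f"allocated on host: {allocation / capacity:.1%}")  # ~2.1% of virtual size
    print(f"used in image:     {usage / capacity:.1%}")       # ~2.0%
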
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.478 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.478 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.478 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.478 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.478 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.479 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.479 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.479 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.480 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.480 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.480 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:32:01.478934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.481 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:32:01.480569) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.481 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.482 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.482 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.482 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.483 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:32:01.482215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.483 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.483 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.483 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.483 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.484 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.484 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.484 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.484 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:32:01.484058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.485 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.485 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.485 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.486 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.487 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:32:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:32:01.488 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
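[Editor's note: the ceilometer lines above trace one full polling cycle per meter: discovery, a coordination check, the poll itself, and a heartbeat timestamp recorded by a watcher process. A minimal sketch of that loop in Python; Pollster and poll_all are illustrative stand-ins, not ceilometer's actual classes:

    from datetime import datetime, timezone

    class Pollster:
        """Hypothetical stand-in for one ceilometer pollster."""
        def __init__(self, name):
            self.name = name

        def get_samples(self, resources):
            # Real pollsters query libvirt/neutron; return a dummy volume here.
            return [(res, 0) for res in resources]

    def poll_all(pollsters, resources):
        heartbeats = {}
        for p in pollsters:
            samples = p.get_samples(resources)               # "Polling pollster <name>"
            heartbeats[p.name] = datetime.now(timezone.utc)  # "Updated heartbeat for <name>"
        return heartbeats

    print(poll_all([Pollster("disk.root.size")], ["b5d60fb8"]))
]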
Nov 29 15:32:01 compute-0 nova_compute[189485]: 2025-11-29 15:32:01.564 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:32:01 compute-0 nova_compute[189485]: 2025-11-29 15:32:01.564 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:32:01 compute-0 nova_compute[189485]: 2025-11-29 15:32:01.564 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:32:01 compute-0 nova_compute[189485]: 2025-11-29 15:32:01.565 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:32:02 compute-0 podman[242254]: 2025-11-29 15:32:02.682362867 +0000 UTC m=+0.126030977 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:32:04 compute-0 nova_compute[189485]: 2025-11-29 15:32:04.945 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.615 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.636 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.637 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
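[Editor's note: the Acquiring/Acquired/Releasing lines around the cache refresh come from oslo.concurrency's named-lock context manager. A minimal sketch of the same pattern; the UUID is taken from the log, and refresh_network_info is a placeholder for nova's neutron round trip:

    from oslo_concurrency import lockutils

    def refresh_network_info(instance_uuid):
        # Placeholder for nova's _get_instance_nw_info() call into neutron.
        return {"instance": instance_uuid, "ports": []}

    uuid = "b5d60fb8-b63e-4b0a-b908-00453be8ce37"
    # Entering and exiting the context manager produces Acquiring/Acquired/
    # Releasing DEBUG lines like the ones above.
    with lockutils.lock("refresh_cache-%s" % uuid):
        info = refresh_network_info(uuid)
]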
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.638 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.638 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.638 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.639 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.639 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.640 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.672 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.672 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.673 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.673 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.760 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.853 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.855 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.934 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:32:05 compute-0 nova_compute[189485]: 2025-11-29 15:32:05.935 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.022 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.024 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.085 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.092 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.188 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.190 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.245 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.247 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.307 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.308 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.366 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
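[Editor's note: each qemu-img run above is wrapped by oslo.concurrency in a prlimit helper that caps the child's address space (--as=1073741824) and CPU time (--cpu=30). A sketch of the roughly equivalent call; the disk path is taken from the log, and the exact wrapper command line is assembled by the library:

    from oslo_concurrency import processutils

    # Mirrors --as=1073741824 --cpu=30 from the logged command line.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)

    out, err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk',
        '--force-share', '--output=json',
        prlimit=limits,
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    )
]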
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.425 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.742 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.744 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5045MB free_disk=72.36110305786133GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.745 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:32:06 compute-0 nova_compute[189485]: 2025-11-29 15:32:06.745 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.052 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.053 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.054 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.055 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.125 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.187 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.188 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.210 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.236 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.309 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.326 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
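[Editor's note: the two "Inventory has not changed" lines boil down to comparing the locally computed inventory dict against what placement already reports, and skipping the update when they are equal. A simplified sketch using the values from the log:

    reported = computed = {
        'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8,
                 'step_size': 1, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1,
                      'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79,
                    'step_size': 1, 'allocation_ratio': 0.9},
    }

    # Placement only needs a write when any resource class differs.
    if reported == computed:
        print("Inventory has not changed; skipping placement update")
]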
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.328 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.329 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.584s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.330 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.331 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.344 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 15:32:07 compute-0 nova_compute[189485]: 2025-11-29 15:32:07.345 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:09 compute-0 nova_compute[189485]: 2025-11-29 15:32:09.493 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:09 compute-0 nova_compute[189485]: 2025-11-29 15:32:09.494 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:09 compute-0 nova_compute[189485]: 2025-11-29 15:32:09.494 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:32:09 compute-0 nova_compute[189485]: 2025-11-29 15:32:09.495 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:32:09 compute-0 nova_compute[189485]: 2025-11-29 15:32:09.496 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 15:32:09 compute-0 nova_compute[189485]: 2025-11-29 15:32:09.947 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:11 compute-0 nova_compute[189485]: 2025-11-29 15:32:11.431 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:14 compute-0 nova_compute[189485]: 2025-11-29 15:32:14.950 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
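[Editor's note: the recurring "[POLLIN] on fd 26" DEBUG lines are the ovs python bindings waking their poll loop whenever the OVSDB connection becomes readable. A sketch of that wait pattern with the ovs library; the fd number is illustrative:

    import select
    from ovs.poller import Poller

    def wait_for_ovsdb(fd):
        # Block until the socket is readable, as the IDL loop does; the
        # library logs "[POLLIN] on fd <n>" when this wakeup fires.
        poller = Poller()
        poller.fd_wait(fd, select.POLLIN)
        poller.block()
]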
Nov 29 15:32:15 compute-0 podman[242302]: 2025-11-29 15:32:15.661197815 +0000 UTC m=+0.101640832 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:32:16 compute-0 nova_compute[189485]: 2025-11-29 15:32:16.434 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:19 compute-0 nova_compute[189485]: 2025-11-29 15:32:19.950 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:21 compute-0 nova_compute[189485]: 2025-11-29 15:32:21.437 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:23 compute-0 podman[242327]: 2025-11-29 15:32:23.67712715 +0000 UTC m=+0.114822115 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 15:32:24 compute-0 nova_compute[189485]: 2025-11-29 15:32:24.954 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:25 compute-0 podman[242347]: 2025-11-29 15:32:25.64650598 +0000 UTC m=+0.083741611 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 15:32:25 compute-0 podman[242353]: 2025-11-29 15:32:25.67927247 +0000 UTC m=+0.112388350 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 29 15:32:25 compute-0 podman[242346]: 2025-11-29 15:32:25.694324705 +0000 UTC m=+0.142889830 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, version=9.4, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, config_id=edpm, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 29 15:32:25 compute-0 podman[242354]: 2025-11-29 15:32:25.728381859 +0000 UTC m=+0.154652375 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 15:32:26 compute-0 nova_compute[189485]: 2025-11-29 15:32:26.439 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:27 compute-0 podman[242422]: 2025-11-29 15:32:27.696492085 +0000 UTC m=+0.134209727 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, version=9.6, managed_by=edpm_ansible, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 29 15:32:29 compute-0 podman[242441]: 2025-11-29 15:32:29.678936935 +0000 UTC m=+0.122192254 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3)
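The three health_status records above are emitted by podman's healthcheck timer, which runs each container's configured 'test' command ('/openstack/healthcheck') and records the result. A minimal sketch of reading the same state out of band, assuming podman is on PATH and the container names from these records exist on this host:

    # Sketch: read the health state behind the records above. Assumes podman
    # is on PATH and the named containers exist.
    import json
    import subprocess

    def health_status(name):
        # podman inspect returns a JSON array with one object per container;
        # State.Health.Status is "healthy", "unhealthy" or "starting".
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)[0]["State"]["Health"]["Status"]

    for name in ("ovn_controller", "openstack_network_exporter", "multipathd"):
        print(name, health_status(name))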
Nov 29 15:32:29 compute-0 podman[203677]: time="2025-11-29T15:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:32:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:32:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
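The two @-prefixed lines are the libpod REST API access log: a client on the podman socket lists all containers and then pulls one round of stats, consistent with a metrics collector polling the API service. A sketch of the same calls through the podman-py client (podman-py being installed and the socket path are assumptions; rootful podman usually serves /run/podman/podman.sock when the API service is enabled):

    # Sketch: the libpod calls the access log shows, via podman-py.
    from podman import PodmanClient

    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        # GET /libpod/containers/json?all=true
        for ctr in client.containers.list(all=True):
            print(ctr.name, ctr.status)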
Nov 29 15:32:29 compute-0 nova_compute[189485]: 2025-11-29 15:32:29.956 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:31 compute-0 openstack_network_exporter[205841]: ERROR   15:32:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:32:31 compute-0 openstack_network_exporter[205841]: ERROR   15:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:32:31 compute-0 openstack_network_exporter[205841]: ERROR   15:32:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:32:31 compute-0 openstack_network_exporter[205841]: ERROR   15:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:32:31 compute-0 openstack_network_exporter[205841]: ERROR   15:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
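These exporter errors are largely expected on a compute-only node: openstack-network-exporter probes appctl control sockets for ovsdb-server and ovn-northd, but ovn-northd does not run here at all, and the local ovsdb-server socket is likely not visible at the path the exporter checks (for example, not mounted into its container). The dpif-netdev/* calls apply only to the userspace/DPDK datapath, while this host uses the kernel datapath (datapath_type "system" in the port binding further down). A sketch of the socket probe, assuming the usual default run directories:

    # Sketch: appctl control sockets are created as <daemon>.<pid>.ctl under
    # the daemon run directory. Paths below are the usual defaults and may
    # differ per deployment.
    import glob

    for pattern in (
        "/var/run/openvswitch/ovs-vswitchd.*.ctl",  # present on a compute node
        "/var/run/openvswitch/ovsdb-server.*.ctl",  # local ovsdb, if visible
        "/var/run/ovn/ovn-northd.*.ctl",            # only on OVN central nodes
    ):
        print(pattern, "->", glob.glob(pattern) or "not found")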
Nov 29 15:32:31 compute-0 nova_compute[189485]: 2025-11-29 15:32:31.442 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:33 compute-0 podman[242460]: 2025-11-29 15:32:33.681057525 +0000 UTC m=+0.121645629 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 15:32:34 compute-0 nova_compute[189485]: 2025-11-29 15:32:34.960 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:36 compute-0 nova_compute[189485]: 2025-11-29 15:32:36.444 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:38 compute-0 nova_compute[189485]: 2025-11-29 15:32:38.541 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:38 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:38.544 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:32:38 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:38.546 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
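The agent matched an UPDATE on the single SB_Global row (nb_cfg bumped from 4 to 5) through ovsdbapp's row-event machinery, then deliberately delays its own chassis-table write by a few seconds so that many agents do not all stamp the southbound database at once. A sketch of the event-matcher pattern, modelled on the ovsdbapp module named in the log path (the handler body is illustrative):

    # Sketch of an ovsdbapp row event like the one logged above: match
    # UPDATE events on the SB_Global table, any row.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

        def run(self, event, row, old):
            # row.nb_cfg carries the new value (5 above), old the prior row.
            print('nb_cfg is now', row.nb_cfg)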
Nov 29 15:32:39 compute-0 nova_compute[189485]: 2025-11-29 15:32:39.962 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:41 compute-0 nova_compute[189485]: 2025-11-29 15:32:41.447 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.656 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.657 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.683 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.779 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.780 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
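The build is serialized with oslo.concurrency locks: first one keyed on the instance UUID, then the shared "compute_resources" lock around the resource tracker's claim. The Acquiring/acquired/released lines, with their waited/held timings, come from the lockutils wrapper. A minimal sketch of the same pattern (the function body is illustrative):

    # Sketch: the oslo.concurrency lock pattern behind these log lines.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def instance_claim():
        # Runs with the "compute_resources" lock held; the wrapper logs the
        # waited/held durations seen in the journal.
        pass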
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.792 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.792 189489 INFO nova.compute.claims [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.964 189489 DEBUG nova.compute.provider_tree [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:32:43 compute-0 nova_compute[189485]: 2025-11-29 15:32:43.984 189489 DEBUG nova.scheduler.client.report [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
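This inventory fixes what placement will admit on the node: capacity per resource class is (total - reserved) * allocation_ratio. Worked out from the values logged above:

    # Worked example: effective capacity placement derives from the
    # inventory reported above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2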
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.010 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.230s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.011 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.066 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.067 189489 DEBUG nova.network.neutron [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.085 189489 INFO nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.116 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.197 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.199 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.199 189489 INFO nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Creating image(s)#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.200 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.200 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.201 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.212 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.273 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
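Each qemu-img call is wrapped in the oslo_concurrency.prlimit helper, which re-executes the command under an address-space cap (--as=1073741824, 1 GiB) and a CPU-time cap (--cpu=30 seconds) so a malformed or hostile image cannot hang or exhaust the compute service. A sketch of the same invocation through oslo.concurrency (the image path is illustrative):

    # Sketch: running qemu-img under the same resource caps nova applies.
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'qemu-img', 'info', '/var/lib/nova/instances/_base/IMAGE',  # illustrative path
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(
            address_space=1024 * 1024 * 1024,  # matches --as=1073741824
            cpu_time=30,                       # matches --cpu=30
        ),
    )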
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.274 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "a7996d50170914c9415f43103aca35ccc26834bd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.275 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.285 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.347 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.348 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd,backing_fmt=raw /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.396 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd,backing_fmt=raw /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.397 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.398 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.452 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
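The root disk was created as a qcow2 overlay whose backing file is the cached base image (_base/a7996d50...), so the per-instance file stores only deltas; the ephemeral disk a few records down gets the same treatment. A sketch verifying the resulting chain, assuming qemu-img is on PATH:

    # Sketch: confirm the overlay -> base chain set up by `qemu-img create`
    # above; --backing-chain reports every layer.
    import json
    import subprocess

    disk = '/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk'
    layers = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--backing-chain', '--output=json', disk]))
    for layer in layers:
        print(layer['filename'], layer['format'])
    # Expected: the instance disk as qcow2, then the _base image as raw.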
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.454 189489 DEBUG nova.virt.disk.api [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Checking if we can resize image /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.454 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.510 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.511 189489 DEBUG nova.virt.disk.api [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Cannot resize image /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.512 189489 DEBUG nova.objects.instance [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 98515579-e916-472d-99ab-5492cfa34aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.532 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.533 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.533 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.549 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.603 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.605 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.605 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.619 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.691 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.691 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.728 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.728 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.123s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.729 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.786 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.787 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.788 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Ensure instance console log exists: /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.789 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.789 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.790 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:32:44 compute-0 nova_compute[189485]: 2025-11-29 15:32:44.962 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:45 compute-0 nova_compute[189485]: 2025-11-29 15:32:45.090 189489 DEBUG nova.network.neutron [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Successfully updated port: 05839a7c-53a3-4f4b-b076-68284d149a00 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:32:45 compute-0 nova_compute[189485]: 2025-11-29 15:32:45.111 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:32:45 compute-0 nova_compute[189485]: 2025-11-29 15:32:45.112 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquired lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:32:45 compute-0 nova_compute[189485]: 2025-11-29 15:32:45.112 189489 DEBUG nova.network.neutron [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:32:45 compute-0 nova_compute[189485]: 2025-11-29 15:32:45.224 189489 DEBUG nova.compute.manager [req-c2e47480-feb8-459e-b7bb-abe7bafcf18e req-4abad990-8aaa-40a5-97a8-c13865f81c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received event network-changed-05839a7c-53a3-4f4b-b076-68284d149a00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:32:45 compute-0 nova_compute[189485]: 2025-11-29 15:32:45.225 189489 DEBUG nova.compute.manager [req-c2e47480-feb8-459e-b7bb-abe7bafcf18e req-4abad990-8aaa-40a5-97a8-c13865f81c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Refreshing instance network info cache due to event network-changed-05839a7c-53a3-4f4b-b076-68284d149a00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:32:45 compute-0 nova_compute[189485]: 2025-11-29 15:32:45.225 189489 DEBUG oslo_concurrency.lockutils [req-c2e47480-feb8-459e-b7bb-abe7bafcf18e req-4abad990-8aaa-40a5-97a8-c13865f81c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:32:45 compute-0 nova_compute[189485]: 2025-11-29 15:32:45.327 189489 DEBUG nova.network.neutron [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.198 189489 DEBUG nova.network.neutron [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updating instance_info_cache with network_info: [{"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.225 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Releasing lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.225 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Instance network_info: |[{"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
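The network_info blob is nova's serialized view of the Neutron port: fixed IP 192.168.0.227 with floating IP 192.168.122.177, MTU 1442 on a tunneled network, plugged as an OVS port into br-int and bound by the ovn driver ("active": false until the port comes up). A sketch of pulling the addresses out of such a blob, trimmed to just the fields used:

    # Sketch: extract addresses from a network_info list like the one above.
    import json

    blob = '''[{"id": "05839a7c-53a3-4f4b-b076-68284d149a00",
                "network": {"subnets": [{"ips": [{"address": "192.168.0.227",
                  "floating_ips": [{"address": "192.168.122.177"}]}]}]}}]'''
    for vif in json.loads(blob):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])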
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.226 189489 DEBUG oslo_concurrency.lockutils [req-c2e47480-feb8-459e-b7bb-abe7bafcf18e req-4abad990-8aaa-40a5-97a8-c13865f81c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.226 189489 DEBUG nova.network.neutron [req-c2e47480-feb8-459e-b7bb-abe7bafcf18e req-4abad990-8aaa-40a5-97a8-c13865f81c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Refreshing network info cache for port 05839a7c-53a3-4f4b-b076-68284d149a00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.229 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Start _get_guest_xml network_info=[{"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:24:51Z,direct_url=<?>,disk_format='qcow2',id=a4b79580-904f-4527-8cf1-3888cf1ff785,min_disk=0,min_ram=0,name='cirros',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:24:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.237 189489 WARNING nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.245 189489 DEBUG nova.virt.libvirt.host [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.246 189489 DEBUG nova.virt.libvirt.host [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.258 189489 DEBUG nova.virt.libvirt.host [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.258 189489 DEBUG nova.virt.libvirt.host [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
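nova probes for a CPU controller first through cgroups v1 (missing) and then through cgroups v2 (found): this host runs the unified cgroup-v2 hierarchy, the RHEL 9 default. A sketch of the v2 probe:

    # Sketch: on a cgroups-v2 host the root cgroup.controllers file lists
    # the available controllers.
    from pathlib import Path

    controllers = Path('/sys/fs/cgroup/cgroup.controllers')
    if controllers.exists():
        print('cgroups v2, cpu controller present:',
              'cpu' in controllers.read_text().split())
    else:
        print('cgroups v1 (or hybrid) layout')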
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.259 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.260 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:24:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='34af94d1-a6e1-4bf0-8957-036dc948fe9d',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:24:51Z,direct_url=<?>,disk_format='qcow2',id=a4b79580-904f-4527-8cf1-3888cf1ff785,min_disk=0,min_ram=0,name='cirros',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:24:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.261 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.261 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.262 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.262 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.262 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.263 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.263 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.264 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.264 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.265 189489 DEBUG nova.virt.hardware [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
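With no topology constraints from flavor or image (all limits and preferences 0:0:0), the candidate topologies are just the (sockets, cores, threads) triples whose product equals the flavor's 1 vCPU, leaving only 1:1:1. A simplified sketch of that enumeration (nova's real version additionally applies the limits and preference ordering):

    # Sketch: enumerate CPU topologies whose product matches the vCPU count,
    # mirroring the "possible topologies" step logged above.
    def possible_topologies(vcpus):
        return [(s, c, t)
                for s in range(1, vcpus + 1)
                for c in range(1, vcpus + 1)
                for t in range(1, vcpus + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))   # [(1, 1, 1)]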
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.270 189489 DEBUG nova.virt.libvirt.vif [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw',id=3,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-gd7j7brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:32:44Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODk5ODEzNzg1ODg0MjUzMzU4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4OTk4MTM3ODU4ODQyNTMzNTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODg5OTgxMzc4NTg4NDI1MzM1OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
[user_data base64 continuation; the syslog prefix was lost when rsyslog split the oversized record (see the "message too long" errors below). The fragment decodes to the tail of the Heat boothook (cfn-create-aws-symlinks, exit 0), the full part-handler.py, an empty cfn-userdata part, and the head of loguserdata.py.]
Nov 29 15:32:46 compute-0 nova_compute[189485]: [base64 tail of the same user_data payload: the remainder of loguserdata.py, the cfn-metadata-server part (https://heat-cfnapi-internal.openstack.svc:8000/v1/), and the cfn-boto-cfg part]',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=98515579-e916-472d-99ab-5492cfa34aea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.270 189489 DEBUG nova.network.os_vif_util [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.272 189489 DEBUG nova.network.os_vif_util [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:4a:52,bridge_name='br-int',has_traffic_filtering=True,id=05839a7c-53a3-4f4b-b076-68284d149a00,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05839a7c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.273 189489 DEBUG nova.objects.instance [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 98515579-e916-472d-99ab-5492cfa34aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.305 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <uuid>98515579-e916-472d-99ab-5492cfa34aea</uuid>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <name>instance-00000003</name>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <memory>524288</memory>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <nova:name>vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw</nova:name>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:32:46</nova:creationTime>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <nova:flavor name="m1.small">
Nov 29 15:32:46 compute-0 nova_compute[189485]:        <nova:memory>512</nova:memory>
Nov 29 15:32:46 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:32:46 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:32:46 compute-0 nova_compute[189485]:        <nova:ephemeral>1</nova:ephemeral>
Nov 29 15:32:46 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:32:46 compute-0 nova_compute[189485]:        <nova:user uuid="5cbf094e2197487fbe16a0fe6e3076ba">admin</nova:user>
Nov 29 15:32:46 compute-0 nova_compute[189485]:        <nova:project uuid="04d676205d9142d19f3d4ce7389f72a2">admin</nova:project>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="a4b79580-904f-4527-8cf1-3888cf1ff785"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:32:46 compute-0 nova_compute[189485]:        <nova:port uuid="05839a7c-53a3-4f4b-b076-68284d149a00">
Nov 29 15:32:46 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="192.168.0.227" ipVersion="4"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <system>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <entry name="serial">98515579-e916-472d-99ab-5492cfa34aea</entry>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <entry name="uuid">98515579-e916-472d-99ab-5492cfa34aea</entry>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </system>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <os>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  </os>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <features>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  </features>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <target dev="vdb" bus="virtio"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.config"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:48:4a:52"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <target dev="tap05839a7c-53"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/console.log" append="off"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <video>
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </video>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:32:46 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:32:46 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:32:46 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:32:46 compute-0 nova_compute[189485]: </domain>
Nov 29 15:32:46 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
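Everything between "End _get_guest_xml" and the source reference above is the literal domain definition handed to libvirt. Pulling the interesting bits back out of a saved copy is a quick job for the standard library; a sketch, assuming the <domain> block was saved to domain.xml (the filename is illustrative):

    import xml.etree.ElementTree as ET

    root = ET.parse("domain.xml").getroot()
    print(root.findtext("name"))                        # instance-00000003
    for disk in root.findall("./devices/disk"):         # vda, vdb, config drive
        src = disk.find("source")
        print(disk.get("device"), None if src is None else src.get("file"))
    for iface in root.findall("./devices/interface"):   # the tap device
        print(iface.get("type"), iface.find("mac").get("address"))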
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.307 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Preparing to wait for external event network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.308 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.309 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.309 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
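The acquire/release pair above is nova serializing access to its per-instance event table through oslo.concurrency. The same named-lock primitive is a one-decorator affair in any OpenStack service; a minimal sketch, assuming oslo.concurrency is installed (lock name copied from the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('98515579-e916-472d-99ab-5492cfa34aea-events')
    def _create_or_get_event():
        # runs with the named lock held, like the
        # prepare_for_instance_event closure logged above
        ...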
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.310 189489 DEBUG nova.virt.libvirt.vif [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw',id=3,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-gd7j7brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:32:44Z,user_data='[same base64 Heat cloud-init multipart as in the get_config record above (cloud-config, boothook.sh, part-handler.py, cfn-userdata, loguserdata.py, cfn-metadata-server, cfn-boto-cfg), again split and truncated by rsyslog]',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=98515579-e916-472d-99ab-5492cfa34aea,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.311 189489 DEBUG nova.network.os_vif_util [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.312 189489 DEBUG nova.network.os_vif_util [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:48:4a:52,bridge_name='br-int',has_traffic_filtering=True,id=05839a7c-53a3-4f4b-b076-68284d149a00,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05839a7c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.313 189489 DEBUG os_vif [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:4a:52,bridge_name='br-int',has_traffic_filtering=True,id=05839a7c-53a3-4f4b-b076-68284d149a00,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05839a7c-53') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
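os_vif is the plug point: nova converts its own VIF dict to the os-vif object above and hands it to the library, which dispatches to the 'ovs' plugin. A sketch of that call, assuming os-vif is installed; the InstanceInfo values are copied from the log, and a real VIFOpenVSwitch (id, address, bridge_name, vif_name, network) must be supplied by the caller, so this is a shape illustration rather than nova's exact code path:

    import os_vif
    from os_vif import objects

    def plug_port(vif):
        # vif: an os_vif.objects.vif.VIFOpenVSwitch like the converted
        # object logged above
        os_vif.initialize()  # load the plugin entry points (ovs, ...)
        info = objects.instance_info.InstanceInfo(
            uuid="98515579-e916-472d-99ab-5492cfa34aea",  # from the log
            name="instance-00000003")
        os_vif.plug(vif, info)  # the call at os_vif/__init__.py:76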
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.314 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.314 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.315 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.319 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.319 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05839a7c-53, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.320 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05839a7c-53, col_values=(('external_ids', {'iface-id': '05839a7c-53a3-4f4b-b076-68284d149a00', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:48:4a:52', 'vm-uuid': '98515579-e916-472d-99ab-5492cfa34aea'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
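Those two ovsdbapp commands form one OVSDB transaction: create the tap port on br-int and stamp its Interface row with the external_ids that let ovn-controller match the port to its logical switch port. The ovs-vsctl equivalent, sketched from Python with the values copied out of the log (not something to run against a live compute node casually):

    import subprocess

    # Equivalent of the AddPortCommand + DbSetCommand transaction above.
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap05839a7c-53",
        "--", "set", "Interface", "tap05839a7c-53",
        "external_ids:iface-id=05839a7c-53a3-4f4b-b076-68284d149a00",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:48:4a:52",
        "external_ids:vm-uuid=98515579-e916-472d-99ab-5492cfa34aea",
    ], check=True)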
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.322 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.325 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:32:46 compute-0 NetworkManager[56360]: <info>  [1764430366.3257] manager: (tap05839a7c-53): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.336 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.337 189489 INFO os_vif [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:48:4a:52,bridge_name='br-int',has_traffic_filtering=True,id=05839a7c-53a3-4f4b-b076-68284d149a00,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05839a7c-53')#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.414 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.415 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.415 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.416 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No VIF found with MAC fa:16:3e:48:4a:52, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:32:46 compute-0 nova_compute[189485]: 2025-11-29 15:32:46.416 189489 INFO nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Using config drive#033[00m
Nov 29 15:32:46 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:32:46.270 189489 DEBUG nova.virt.libvirt.vif [None req-ef8f50df-03 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 15:32:46 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:32:46.310 189489 DEBUG nova.virt.libvirt.vif [None req-ef8f50df-03 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
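These two rsyslogd complaints account for the mangled user_data records earlier in this capture: both nova.virt.libvirt.vif debug messages blew past the 8096-byte limit, so rsyslog split them and dropped the overflow. When the full records matter, the usual fix is raising the limit with the legacy $MaxMessageSize directive (e.g. $MaxMessageSize 64k), placed near the top of rsyslog.conf before any inputs are configured.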
Nov 29 15:32:46 compute-0 podman[242511]: 2025-11-29 15:32:46.635987242 +0000 UTC m=+0.081514522 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
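The exporter container's health is re-checked on a timer, which is what produced the record above; the same check can be run by hand. A sketch via subprocess, using the container name from that record (exit status 0 means healthy):

    import subprocess

    subprocess.run(["podman", "healthcheck", "run", "podman_exporter"],
                   check=True)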
Nov 29 15:32:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:47.550 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:32:47 compute-0 nova_compute[189485]: 2025-11-29 15:32:47.601 189489 INFO nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Creating config drive at /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.config#033[00m
Nov 29 15:32:47 compute-0 nova_compute[189485]: 2025-11-29 15:32:47.613 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptr14jeqb execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:32:47 compute-0 nova_compute[189485]: 2025-11-29 15:32:47.762 189489 DEBUG oslo_concurrency.processutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptr14jeqb" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
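The config drive is nothing more exotic than an ISO9660 image: nova stages the metadata files in a temp directory and wraps them with mkisofs, the config-2 volume label being what cloud-init probes for. A stripped-down sketch of the same invocation (output path and staging directory are illustrative):

    import subprocess

    subprocess.run([
        "/usr/bin/mkisofs", "-o", "disk.config",
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",
        "staging_dir/",   # stand-in for the /tmp/tmptr14jeqb dir in the log
    ], check=True)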
Nov 29 15:32:47 compute-0 kernel: tap05839a7c-53: entered promiscuous mode
Nov 29 15:32:47 compute-0 NetworkManager[56360]: <info>  [1764430367.8538] manager: (tap05839a7c-53): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Nov 29 15:32:47 compute-0 ovn_controller[97827]: 2025-11-29T15:32:47Z|00040|binding|INFO|Claiming lport 05839a7c-53a3-4f4b-b076-68284d149a00 for this chassis.
Nov 29 15:32:47 compute-0 ovn_controller[97827]: 2025-11-29T15:32:47Z|00041|binding|INFO|05839a7c-53a3-4f4b-b076-68284d149a00: Claiming fa:16:3e:48:4a:52 192.168.0.227
Nov 29 15:32:47 compute-0 nova_compute[189485]: 2025-11-29 15:32:47.857 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:47.865 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:4a:52 192.168.0.227'], port_security=['fa:16:3e:48:4a:52 192.168.0.227'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nju3ymh64jso-aat7xqwj3j4y-2ikheen5x3vw-port-q265egptd67m', 'neutron:cidrs': '192.168.0.227/24', 'neutron:device_id': '98515579-e916-472d-99ab-5492cfa34aea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa63adc8-00c5-408f-a9a0-653db4d11058', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nju3ymh64jso-aat7xqwj3j4y-2ikheen5x3vw-port-q265egptd67m', 'neutron:project_id': '04d676205d9142d19f3d4ce7389f72a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab1ce576-0f3a-4a3e-abf1-69502fd41864', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.177'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566ecd39-faeb-413e-8894-df94f2ba695a, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=05839a7c-53a3-4f4b-b076-68284d149a00) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:32:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:47.868 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 05839a7c-53a3-4f4b-b076-68284d149a00 in datapath fa63adc8-00c5-408f-a9a0-653db4d11058 bound to our chassis#033[00m
Nov 29 15:32:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:47.889 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa63adc8-00c5-408f-a9a0-653db4d11058#033[00m
Nov 29 15:32:47 compute-0 ovn_controller[97827]: 2025-11-29T15:32:47Z|00042|binding|INFO|Setting lport 05839a7c-53a3-4f4b-b076-68284d149a00 ovn-installed in OVS
Nov 29 15:32:47 compute-0 ovn_controller[97827]: 2025-11-29T15:32:47Z|00043|binding|INFO|Setting lport 05839a7c-53a3-4f4b-b076-68284d149a00 up in Southbound
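Claiming the lport, setting ovn-installed, and flipping it up in the Southbound DB is what ultimately releases nova's network-vif-plugged wait further down. The binding can be inspected directly; a sketch shelling out to ovn-sbctl (must run where the OVN southbound DB is reachable):

    import subprocess

    # 'up' should read true and 'chassis' should reference compute-0.
    subprocess.run([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=05839a7c-53a3-4f4b-b076-68284d149a00",
    ], check=True)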
Nov 29 15:32:47 compute-0 nova_compute[189485]: 2025-11-29 15:32:47.906 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:32:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:47.914 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[7f0ba122-8546-48e4-b2e1-a5f46e947f80]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:32:47 compute-0 systemd-machined[155802]: New machine qemu-3-instance-00000003.
Nov 29 15:32:47 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Nov 29 15:32:47 compute-0 systemd-udevd[242558]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:32:47 compute-0 NetworkManager[56360]: <info>  [1764430367.9522] device (tap05839a7c-53): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:32:47 compute-0 NetworkManager[56360]: <info>  [1764430367.9609] device (tap05839a7c-53): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:32:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:47.960 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[56ca56be-df01-48b8-981e-bb9207cfe4d1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:32:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:47.964 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[b49e8bec-dfc8-44d4-a611-f8e238f07bb3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:32:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:48.013 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[b3a3f057-5374-49e2-9289-05a6c39d598b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:32:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:48.037 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[188adba5-d108-4b2e-8f1f-ceb4377f8029]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa63adc8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:9e:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373724, 'reachable_time': 44881, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242568, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:32:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:48.061 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8316069b-17e9-4897-9f63-fa1dd942de98]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373741, 'tstamp': 373741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242569, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373746, 'tstamp': 373746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242569, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
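The RTM_NEWADDR replies show the agent putting 169.254.169.254 and 192.168.0.2/24 on the tap interface inside the ovnmeta- namespace; that is the endpoint the guest reaches for metadata. It can be poked from the hypervisor through the same namespace; a sketch (namespace name copied from the log; whether the proxy can map the request to an instance depends on the source address, so treat this as a reachability probe):

    import subprocess

    subprocess.run([
        "ip", "netns", "exec", "ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058",
        "curl", "-s", "http://169.254.169.254/openstack/latest/meta_data.json",
    ], check=True)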
Nov 29 15:32:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:48.063 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa63adc8-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.067 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.069 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:48.069 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa63adc8-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:32:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:48.070 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 15:32:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:48.071 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa63adc8-00, col_values=(('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:32:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:48.072 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
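The DelPortCommand/AddPortCommand/DbSetCommand entries above are ovsdbapp OVSDB transactions; because they are issued with if_exists/may_exist, re-running them is idempotent, which is why the commits report "Transaction caused no change". A rough sketch of the equivalent calls against a local ovsdb-server (the socket path is an assumption; port name, bridges and iface-id are taken from the log):

```python
# Sketch under assumptions noted above; not the agent's actual wiring.
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with api.transaction(check_error=True) as txn:
    txn.add(api.del_port('tapfa63adc8-00', bridge='br-ex', if_exists=True))
    txn.add(api.add_port('br-int', 'tapfa63adc8-00', may_exist=True))
    txn.add(api.db_set('Interface', 'tapfa63adc8-00',
                       ('external_ids',
                        {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'})))
```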
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.692 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430368.6917202, 98515579-e916-472d-99ab-5492cfa34aea => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.692 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] VM Started (Lifecycle Event)
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.721 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.730 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430368.6919215, 98515579-e916-472d-99ab-5492cfa34aea => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.731 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] VM Paused (Lifecycle Event)
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.756 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.764 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:32:48 compute-0 nova_compute[189485]: 2025-11-29 15:32:48.794 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] During sync_power_state the instance has a pending task (spawning). Skip.
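In the sync above, the DB still records power_state 0 (NOSTATE) while the hypervisor reports 3 (PAUSED, in nova's power-state numbering), but the handler skips reconciling because a task is in flight. A simplified sketch of that guard, not nova's actual code:

```python
# Numeric states follow nova.compute.power_state: 0=NOSTATE, 1=RUNNING,
# 3=PAUSED. The first check mirrors the "pending task ... Skip." message.
NOSTATE, RUNNING, PAUSED = 0, 1, 3

def sync_power_state(db_power_state, vm_power_state, task_state):
    if task_state is not None:          # e.g. 'spawning'
        return 'skip'                   # let the in-flight task finish
    if db_power_state != vm_power_state:
        return 'update-db'              # record what the hypervisor reports
    return 'in-sync'

assert sync_power_state(NOSTATE, PAUSED, 'spawning') == 'skip'
```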
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.143 189489 DEBUG nova.compute.manager [req-b61cfd37-e212-4bfd-ba8a-972f37181f7a req-0b0dc456-1313-4e2e-9ee1-1c33c58dd15b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received event network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.144 189489 DEBUG oslo_concurrency.lockutils [req-b61cfd37-e212-4bfd-ba8a-972f37181f7a req-0b0dc456-1313-4e2e-9ee1-1c33c58dd15b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.144 189489 DEBUG oslo_concurrency.lockutils [req-b61cfd37-e212-4bfd-ba8a-972f37181f7a req-0b0dc456-1313-4e2e-9ee1-1c33c58dd15b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.145 189489 DEBUG oslo_concurrency.lockutils [req-b61cfd37-e212-4bfd-ba8a-972f37181f7a req-0b0dc456-1313-4e2e-9ee1-1c33c58dd15b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.146 189489 DEBUG nova.compute.manager [req-b61cfd37-e212-4bfd-ba8a-972f37181f7a req-0b0dc456-1313-4e2e-9ee1-1c33c58dd15b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Processing event network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.147 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
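The "Instance event wait completed" line is the other half of a handshake: the spawning request registered a waiter for network-vif-plugged, and the external-event request popped and signalled it under the per-instance "-events" lock seen above. A toy model of that pattern (names are illustrative, not nova's):

```python
import threading

class InstanceEvents:
    """Register-then-pop handshake between a waiter and an event handler."""
    def __init__(self):
        self._events = {}
        self._lock = threading.Lock()   # plays the role of the "-events" lock

    def prepare(self, tag):
        ev = threading.Event()
        with self._lock:
            self._events[tag] = ev
        return ev

    def pop(self, tag):
        with self._lock:
            return self._events.pop(tag, None)   # None -> "No waiting events"

events = InstanceEvents()
waiter = events.prepare('network-vif-plugged-05839a7c')
# ... neutron reports the plug; the handler thread then runs:
ev = events.pop('network-vif-plugged-05839a7c')
if ev:
    ev.set()
waiter.wait(timeout=300)   # the spawn path blocks here until signalled
```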
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.161 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.162 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430369.1614192, 98515579-e916-472d-99ab-5492cfa34aea => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.163 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] VM Resumed (Lifecycle Event)
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.174 189489 INFO nova.virt.libvirt.driver [-] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Instance spawned successfully.
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.175 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.194 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.204 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.213 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.214 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.216 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.217 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.218 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.219 189489 DEBUG nova.virt.libvirt.driver [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.227 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.291 189489 INFO nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Took 5.09 seconds to spawn the instance on the hypervisor.
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.292 189489 DEBUG nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.368 189489 INFO nova.compute.manager [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Took 5.62 seconds to build instance.
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.389 189489 DEBUG oslo_concurrency.lockutils [None req-ef8f50df-0309-4a2f-84de-b665ca5ab752 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.732s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:32:49 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 15:32:49 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 29 15:32:49 compute-0 nova_compute[189485]: 2025-11-29 15:32:49.965 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:50 compute-0 nova_compute[189485]: 2025-11-29 15:32:50.431 189489 DEBUG nova.network.neutron [req-c2e47480-feb8-459e-b7bb-abe7bafcf18e req-4abad990-8aaa-40a5-97a8-c13865f81c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updated VIF entry in instance network info cache for port 05839a7c-53a3-4f4b-b076-68284d149a00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:32:50 compute-0 nova_compute[189485]: 2025-11-29 15:32:50.432 189489 DEBUG nova.network.neutron [req-c2e47480-feb8-459e-b7bb-abe7bafcf18e req-4abad990-8aaa-40a5-97a8-c13865f81c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updating instance_info_cache with network_info: [{"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:32:50 compute-0 nova_compute[189485]: 2025-11-29 15:32:50.451 189489 DEBUG oslo_concurrency.lockutils [req-c2e47480-feb8-459e-b7bb-abe7bafcf18e req-4abad990-8aaa-40a5-97a8-c13865f81c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
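The instance_info_cache entry logged above is plain JSON. A small sketch of pulling the fixed and floating addresses out of such an entry (network_info stands for the deserialized list from the log line):

```python
def addresses(network_info):
    """Yield (vif id, fixed ip, floating ips) triples from a nova
    network_info cache entry."""
    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                yield (vif['id'], ip['address'],
                       [fip['address'] for fip in ip.get('floating_ips', [])])

# For the entry above this yields:
# ('05839a7c-53a3-4f4b-b076-68284d149a00', '192.168.0.227',
#  ['192.168.122.177'])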
Nov 29 15:32:51 compute-0 nova_compute[189485]: 2025-11-29 15:32:51.225 189489 DEBUG nova.compute.manager [req-c31252e7-cddd-4dde-b4c1-475afc2c885d req-09e4d1ee-5a2b-4760-91d1-755c1b640c71 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received event network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:32:51 compute-0 nova_compute[189485]: 2025-11-29 15:32:51.227 189489 DEBUG oslo_concurrency.lockutils [req-c31252e7-cddd-4dde-b4c1-475afc2c885d req-09e4d1ee-5a2b-4760-91d1-755c1b640c71 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:32:51 compute-0 nova_compute[189485]: 2025-11-29 15:32:51.227 189489 DEBUG oslo_concurrency.lockutils [req-c31252e7-cddd-4dde-b4c1-475afc2c885d req-09e4d1ee-5a2b-4760-91d1-755c1b640c71 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:32:51 compute-0 nova_compute[189485]: 2025-11-29 15:32:51.228 189489 DEBUG oslo_concurrency.lockutils [req-c31252e7-cddd-4dde-b4c1-475afc2c885d req-09e4d1ee-5a2b-4760-91d1-755c1b640c71 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:32:51 compute-0 nova_compute[189485]: 2025-11-29 15:32:51.228 189489 DEBUG nova.compute.manager [req-c31252e7-cddd-4dde-b4c1-475afc2c885d req-09e4d1ee-5a2b-4760-91d1-755c1b640c71 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] No waiting events found dispatching network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:32:51 compute-0 nova_compute[189485]: 2025-11-29 15:32:51.229 189489 WARNING nova.compute.manager [req-c31252e7-cddd-4dde-b4c1-475afc2c885d req-09e4d1ee-5a2b-4760-91d1-755c1b640c71 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received unexpected event network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 for instance with vm_state active and task_state None.
Nov 29 15:32:51 compute-0 nova_compute[189485]: 2025-11-29 15:32:51.323 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:54 compute-0 podman[242598]: 2025-11-29 15:32:54.651550896 +0000 UTC m=+0.105625199 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:32:54 compute-0 nova_compute[189485]: 2025-11-29 15:32:54.967 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:56 compute-0 nova_compute[189485]: 2025-11-29 15:32:56.326 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:32:56 compute-0 podman[242620]: 2025-11-29 15:32:56.675002557 +0000 UTC m=+0.099066041 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent)
Nov 29 15:32:56 compute-0 podman[242619]: 2025-11-29 15:32:56.678153143 +0000 UTC m=+0.118618178 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.tags=base rhel9, version=9.4, name=ubi9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 29 15:32:56 compute-0 podman[242621]: 2025-11-29 15:32:56.679441648 +0000 UTC m=+0.104433018 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:32:56 compute-0 podman[242622]: 2025-11-29 15:32:56.701702126 +0000 UTC m=+0.126993063 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:32:58 compute-0 podman[242693]: 2025-11-29 15:32:58.658616009 +0000 UTC m=+0.100010897 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Nov 29 15:32:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:59.163 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:32:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:59.163 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:32:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:32:59.164 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
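These acquire/release pairs, with their waited/held timings, are what oslo.concurrency's named-lock helpers emit at DEBUG level. A minimal sketch of the two usual forms that produce such lines:

```python
from oslo_concurrency import lockutils

@lockutils.synchronized('_check_child_processes')
def check_child_processes():
    # Body runs with the named lock held; concurrent callers queue up and
    # the "waited"/"held" durations above are logged around this region.
    pass

# The same lock can also be taken explicitly as a context manager:
with lockutils.lock('_check_child_processes'):
    pass
```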
Nov 29 15:32:59 compute-0 podman[203677]: time="2025-11-29T15:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:32:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:32:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4770 "" "Go-http-client/1.1"
Nov 29 15:32:59 compute-0 nova_compute[189485]: 2025-11-29 15:32:59.968 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:00 compute-0 nova_compute[189485]: 2025-11-29 15:33:00.503 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:00 compute-0 nova_compute[189485]: 2025-11-29 15:33:00.504 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:33:00 compute-0 podman[242713]: 2025-11-29 15:33:00.648775737 +0000 UTC m=+0.104960471 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, org.label-schema.build-date=20251125)
Nov 29 15:33:01 compute-0 nova_compute[189485]: 2025-11-29 15:33:01.330 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:01 compute-0 openstack_network_exporter[205841]: ERROR   15:33:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:33:01 compute-0 openstack_network_exporter[205841]: ERROR   15:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:33:01 compute-0 openstack_network_exporter[205841]: 
Nov 29 15:33:01 compute-0 openstack_network_exporter[205841]: ERROR   15:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:33:01 compute-0 openstack_network_exporter[205841]: ERROR   15:33:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:33:01 compute-0 openstack_network_exporter[205841]: ERROR   15:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:33:01 compute-0 openstack_network_exporter[205841]: 
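The exporter errors above mean its probes of the daemon control sockets failed: there is no userspace (netdev) datapath to answer the dpif-netdev/* commands, and ovn-northd does not run on a compute node, so no control socket exists for it. The same probes can be reproduced by hand with ovs-appctl; a sketch, assuming ovs-appctl is installed and ovs-vswitchd is running locally:

```python
import subprocess

# On a node with only a kernel (system) datapath, both calls fail with
# "please specify an existing datapath", matching the exporter's errors.
for cmd in (['ovs-appctl', 'dpif-netdev/pmd-perf-show'],
            ['ovs-appctl', 'dpif-netdev/pmd-rxq-show']):
    res = subprocess.run(cmd, capture_output=True, text=True)
    if res.returncode != 0:
        print(cmd[1], '->', res.stderr.strip())
```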
Nov 29 15:33:01 compute-0 nova_compute[189485]: 2025-11-29 15:33:01.598 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:33:01 compute-0 nova_compute[189485]: 2025-11-29 15:33:01.599 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:33:01 compute-0 nova_compute[189485]: 2025-11-29 15:33:01.599 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:33:04 compute-0 podman[242731]: 2025-11-29 15:33:04.666327782 +0000 UTC m=+0.112868204 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.949 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updating instance_info_cache with network_info: [{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.966 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.967 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.967 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.968 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.969 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.970 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.993 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.994 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.994 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:33:04 compute-0 nova_compute[189485]: 2025-11-29 15:33:04.995 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.083 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.163 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.167 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.228 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.229 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.290 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.291 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.354 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.364 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.436 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.438 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.496 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.497 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.559 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.561 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.641 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.654 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.714 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.717 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.782 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.786 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.854 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.856 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:33:05 compute-0 nova_compute[189485]: 2025-11-29 15:33:05.915 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
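The repeated qemu-img probes above are nova's periodic per-instance disk-usage refresh; each call is wrapped by oslo_concurrency.prlimit so a hung or runaway qemu-img cannot exhaust the host. A minimal sketch of the same invocation, assuming only that oslo.concurrency is installed (the limits mirror the --as=1073741824 --cpu=30 flags seen in the log):

    from oslo_concurrency import processutils

    # 1 GiB address space and 30 s of CPU time -- the values in the log.
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        cpu_time=30,
        address_space=1024 * 1024 * 1024)

    def qemu_img_info(path):
        # --force-share lets the probe read an image a running guest holds open.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=QEMU_IMG_LIMITS)
        return out

Passing prlimit= makes processutils re-exec the command through "python3 -m oslo_concurrency.prlimit", which is why every logged command line carries that prefix.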
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.304 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.306 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4912MB free_disk=72.36017227172852GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.307 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.308 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.335 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.428 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.429 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.429 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 98515579-e916-472d-99ab-5492cfa34aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.430 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.430 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.528 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.551 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
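For reference, placement derives schedulable capacity from an inventory like the one above as (total - reserved) * allocation_ratio. A quick check against the logged figures (assuming that standard formula):

    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2

So with 3 of 8 physical vCPUs allocated (free_vcpus=5 above), the 4.0 overcommit ratio still leaves placement 32 schedulable VCPUs.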
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.574 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:33:06 compute-0 nova_compute[189485]: 2025-11-29 15:33:06.575 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.267s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
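The "Acquiring"/"acquired ... waited"/"released ... held" lines are emitted by lockutils itself: the resource tracker serializes its update behind a single named semaphore. A sketch of the pattern, assuming oslo.concurrency (the function name is illustrative, not nova's actual module layout):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Everything in here runs with the "compute_resources" lock held;
        # lockutils logs the waited/held timings seen above on its own.
        ...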
Nov 29 15:33:07 compute-0 nova_compute[189485]: 2025-11-29 15:33:07.090 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:07 compute-0 nova_compute[189485]: 2025-11-29 15:33:07.091 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:07 compute-0 nova_compute[189485]: 2025-11-29 15:33:07.118 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:07 compute-0 nova_compute[189485]: 2025-11-29 15:33:07.119 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:07 compute-0 nova_compute[189485]: 2025-11-29 15:33:07.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:09 compute-0 nova_compute[189485]: 2025-11-29 15:33:09.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:33:09 compute-0 nova_compute[189485]: 2025-11-29 15:33:09.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
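These "Running periodic task" entries are oslo.service's runner walking the ComputeManager's decorated methods. A minimal, self-contained sketch of how such a task is declared, assuming oslo.service and oslo.config (the spacing value and option default below are illustrative):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # Mirrors the guard logged above: a non-positive interval
            # turns the reclaim pass into a no-op.
            if CONF.reclaim_instance_interval <= 0:
                return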
Nov 29 15:33:09 compute-0 nova_compute[189485]: 2025-11-29 15:33:09.972 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:11 compute-0 nova_compute[189485]: 2025-11-29 15:33:11.339 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:14 compute-0 nova_compute[189485]: 2025-11-29 15:33:14.975 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:16 compute-0 nova_compute[189485]: 2025-11-29 15:33:16.343 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:17 compute-0 podman[242791]: 2025-11-29 15:33:17.680520504 +0000 UTC m=+0.118054542 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:33:17 compute-0 ovn_controller[97827]: 2025-11-29T15:33:17Z|00044|memory_trim|INFO|Detected inactivity (last active 30017 ms ago): trimming memory
Nov 29 15:33:19 compute-0 nova_compute[189485]: 2025-11-29 15:33:19.979 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:21 compute-0 nova_compute[189485]: 2025-11-29 15:33:21.348 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:22 compute-0 ovn_controller[97827]: 2025-11-29T15:33:22Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:48:4a:52 192.168.0.227
Nov 29 15:33:22 compute-0 ovn_controller[97827]: 2025-11-29T15:33:22Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:48:4a:52 192.168.0.227
Nov 29 15:33:24 compute-0 nova_compute[189485]: 2025-11-29 15:33:24.983 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:25 compute-0 podman[242831]: 2025-11-29 15:33:25.682623758 +0000 UTC m=+0.126581442 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 15:33:26 compute-0 nova_compute[189485]: 2025-11-29 15:33:26.354 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:27 compute-0 podman[242848]: 2025-11-29 15:33:27.682215708 +0000 UTC m=+0.119291945 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, architecture=x86_64, container_name=kepler, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:33:27 compute-0 podman[242850]: 2025-11-29 15:33:27.699069421 +0000 UTC m=+0.128990606 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 15:33:27 compute-0 podman[242849]: 2025-11-29 15:33:27.704534638 +0000 UTC m=+0.134300089 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:33:27 compute-0 podman[242851]: 2025-11-29 15:33:27.735595432 +0000 UTC m=+0.156999688 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:33:29 compute-0 podman[242923]: 2025-11-29 15:33:29.693554565 +0000 UTC m=+0.126836239 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9)
Nov 29 15:33:29 compute-0 podman[203677]: time="2025-11-29T15:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:33:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:33:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
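Those GET lines are prometheus-podman-exporter polling the libpod REST API over the Podman service socket (the CONTAINER_HOST=unix:///run/podman/podman.sock setting visible in the exporter's config above). A stdlib-only sketch of the same query, assuming that socket path and root access to it:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket -- enough to talk to libpod."""
        def __init__(self, sock_path):
            super().__init__('localhost')
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])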
Nov 29 15:33:29 compute-0 nova_compute[189485]: 2025-11-29 15:33:29.985 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:31 compute-0 nova_compute[189485]: 2025-11-29 15:33:31.358 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:31 compute-0 openstack_network_exporter[205841]: ERROR   15:33:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:33:31 compute-0 openstack_network_exporter[205841]: ERROR   15:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:33:31 compute-0 openstack_network_exporter[205841]: ERROR   15:33:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:33:31 compute-0 openstack_network_exporter[205841]: ERROR   15:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:33:31 compute-0 openstack_network_exporter[205841]: ERROR   15:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
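The exporter fails here because it cannot find the unixctl control sockets it would drive via ovs-appctl: ovn-northd does not run on a compute node, and with no userspace (dpif-netdev) datapath the PMD queries have nothing to target, so these errors are expected on this host. A quick existence check for the sockets (an illustrative sketch; the ovs-vswitchd.<pid>.ctl naming and the /var/run/ovn path are assumptions about this deployment):

    import glob

    for pattern in ('/var/run/openvswitch/ovs-vswitchd.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        matches = glob.glob(pattern)
        print(pattern, '->', matches or 'no control socket files found')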
Nov 29 15:33:31 compute-0 podman[242943]: 2025-11-29 15:33:31.65124464 +0000 UTC m=+0.101506518 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:33:34 compute-0 nova_compute[189485]: 2025-11-29 15:33:34.989 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:35 compute-0 podman[242963]: 2025-11-29 15:33:35.638119991 +0000 UTC m=+0.091066608 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 15:33:36 compute-0 nova_compute[189485]: 2025-11-29 15:33:36.362 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:39 compute-0 nova_compute[189485]: 2025-11-29 15:33:39.991 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:41 compute-0 nova_compute[189485]: 2025-11-29 15:33:41.366 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:44 compute-0 nova_compute[189485]: 2025-11-29 15:33:44.994 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:46 compute-0 nova_compute[189485]: 2025-11-29 15:33:46.369 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:48 compute-0 podman[242987]: 2025-11-29 15:33:48.722608718 +0000 UTC m=+0.155035606 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:33:49 compute-0 nova_compute[189485]: 2025-11-29 15:33:49.996 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:51 compute-0 nova_compute[189485]: 2025-11-29 15:33:51.372 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:55 compute-0 nova_compute[189485]: 2025-11-29 15:33:54.999 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:56 compute-0 nova_compute[189485]: 2025-11-29 15:33:56.375 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:33:56 compute-0 podman[243012]: 2025-11-29 15:33:56.658194415 +0000 UTC m=+0.104266292 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Nov 29 15:33:58 compute-0 podman[243030]: 2025-11-29 15:33:58.631634114 +0000 UTC m=+0.077549796 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.buildah.version=1.29.0, vcs-type=git, version=9.4, container_name=kepler, release-0.7.12=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container)
Nov 29 15:33:58 compute-0 podman[243031]: 2025-11-29 15:33:58.662843042 +0000 UTC m=+0.090324128 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 15:33:58 compute-0 podman[243032]: 2025-11-29 15:33:58.683012423 +0000 UTC m=+0.103922203 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0)
Nov 29 15:33:58 compute-0 podman[243039]: 2025-11-29 15:33:58.720456259 +0000 UTC m=+0.148585132 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:33:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:33:59.165 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:33:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:33:59.165 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:33:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:33:59.166 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:33:59 compute-0 podman[203677]: time="2025-11-29T15:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:33:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:33:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
Nov 29 15:34:00 compute-0 nova_compute[189485]: 2025-11-29 15:34:00.002 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:00 compute-0 podman[243107]: 2025-11-29 15:34:00.646124944 +0000 UTC m=+0.087350668 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, release=1755695350, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.050 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.051 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.052 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.052 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.062 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.066 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '940da983-04c4-46c2-8cd4-96ce0736a67e', 'name': 'vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
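The two instance-data entries above come out of ceilometer's libvirt discovery, which enumerates the domains running on this hypervisor rather than listing servers through the Nova API. A minimal sketch of that enumeration, assuming libvirt-python and the usual qemu:///system URI (both assumptions, not taken from this log):

    import libvirt

    # A monitoring agent only needs a read-only connection.
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
            # Nova reuses the instance UUID as the libvirt domain UUID,
            # so this maps directly onto the 'id' fields logged above.
            print(dom.UUIDString(), dom.name())
    finally:
        conn.close()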
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.070 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 98515579-e916-472d-99ab-5492cfa34aea from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.072 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/98515579-e916-472d-99ab-5492cfa34aea -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}21f1b25129fd7f828fba82e66d37137d0fe6cb4aa99b37755c299ad1aab8f053" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 29 15:34:01 compute-0 nova_compute[189485]: 2025-11-29 15:34:01.381 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:01 compute-0 openstack_network_exporter[205841]: ERROR   15:34:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:34:01 compute-0 openstack_network_exporter[205841]: ERROR   15:34:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:34:01 compute-0 openstack_network_exporter[205841]: ERROR   15:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:34:01 compute-0 openstack_network_exporter[205841]: ERROR   15:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
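The exporter errors above mean no ovn-northd or ovsdb-server control sockets exist on this node, which is expected on a compute host where only ovn-controller and ovs-vswitchd run locally. A quick check of which control sockets are actually present, a sketch that assumes the conventional /var/run/openvswitch run directory (some deployments keep the OVN sockets under /var/run/ovn instead):

    import glob

    # appctl-style tools find a daemon through its <name>.<pid>.ctl socket.
    for pattern in ('/var/run/openvswitch/ovn-northd.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/openvswitch/ovs-vswitchd.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'no control socket found')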
Nov 29 15:34:01 compute-0 nova_compute[189485]: 2025-11-29 15:34:01.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:01 compute-0 nova_compute[189485]: 2025-11-29 15:34:01.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:34:01 compute-0 nova_compute[189485]: 2025-11-29 15:34:01.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:34:01 compute-0 nova_compute[189485]: 2025-11-29 15:34:01.704 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:34:01 compute-0 nova_compute[189485]: 2025-11-29 15:34:01.705 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:34:01 compute-0 nova_compute[189485]: 2025-11-29 15:34:01.705 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:34:01 compute-0 nova_compute[189485]: 2025-11-29 15:34:01.706 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
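The Acquiring/Acquired pair above is nova serializing info-cache refreshes per instance with oslo.concurrency; the lock name embeds the instance UUID. A minimal sketch of the same primitive (the body is hypothetical, only the lock-name pattern comes from the log):

    from oslo_concurrency import lockutils

    instance_uuid = 'b5d60fb8-b63e-4b0a-b908-00453be8ce37'

    # In-process lock keyed per instance, matching the
    # "refresh_cache-<uuid>" names logged above.
    with lockutils.lock('refresh_cache-%s' % instance_uuid):
        pass  # ... rebuild the network info cache here ...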
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.892 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Sat, 29 Nov 2025 15:34:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-be5a4a15-3e60-47fe-8627-7180c0ea91ab x-openstack-request-id: req-be5a4a15-3e60-47fe-8627-7180c0ea91ab _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.892 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "98515579-e916-472d-99ab-5492cfa34aea", "name": "vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw", "status": "ACTIVE", "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "user_id": "5cbf094e2197487fbe16a0fe6e3076ba", "metadata": {"metering.server_group": "cf461906-40b9-4ac3-86c2-0d606dd14d99"}, "hostId": "3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17", "image": {"id": "a4b79580-904f-4527-8cf1-3888cf1ff785", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a4b79580-904f-4527-8cf1-3888cf1ff785"}]}, "flavor": {"id": "34af94d1-a6e1-4bf0-8957-036dc948fe9d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/34af94d1-a6e1-4bf0-8957-036dc948fe9d"}]}, "created": "2025-11-29T15:32:42Z", "updated": "2025-11-29T15:32:49Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.227", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:48:4a:52"}, {"version": 4, "addr": "192.168.122.177", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:48:4a:52"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/98515579-e916-472d-99ab-5492cfa34aea"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/98515579-e916-472d-99ab-5492cfa34aea"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-29T15:32:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.892 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/98515579-e916-472d-99ab-5492cfa34aea used request id req-be5a4a15-3e60-47fe-8627-7180c0ea91ab request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
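The REQ/RESP pair above is an ordinary microversion-2.1 GET on /v2.1/servers/{id}; per the "Querying metadata ... from Nova API" line, ceilometer only resorts to it when its libvirt-side metadata for an instance is incomplete. A sketch reproducing the call with keystoneauth1 and python-novaclient; the credentials and Keystone URL are placeholders, only the server UUID and microversion come from the log:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # placeholder
        username='ceilometer', password='secret', project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    server = nova.servers.get('98515579-e916-472d-99ab-5492cfa34aea')
    print(server.name, server.status, server.metadata)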
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.893 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98515579-e916-472d-99ab-5492cfa34aea', 'name': 'vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.893 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.893 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.893 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
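The coordination check above is a no-op here because no polling source requires it; when coordination is enabled, agents split the resource set with a hash ring so each instance is polled by exactly one member. A toy illustration of that partitioning using tooz's hash ring (member names are hypothetical; this shows the idea, not ceilometer's exact code path):

    from tooz import hashring

    # Two hypothetical polling agents sharing the workload.
    ring = hashring.HashRing(['agent-compute-0', 'agent-compute-1'])

    # Each resource maps to the member(s) responsible for polling it.
    print(ring.get_nodes(b'b5d60fb8-b63e-4b0a-b908-00453be8ce37'))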
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.894 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.894 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:34:01.894024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.898 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.901 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes volume: 4698 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.904 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 98515579-e916-472d-99ab-5492cfa34aea / tap05839a7c-53 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.904 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.bytes volume: 2188 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
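The per-instance volumes above are cumulative transmit counters read from libvirt. A sketch of the underlying call, with the UUID and tap device taken from the log and the connection URI assumed:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('98515579-e916-472d-99ab-5492cfa34aea')

    # Cumulative (rx_bytes, rx_packets, rx_errs, rx_drop,
    #             tx_bytes, tx_packets, tx_errs, tx_drop).
    stats = dom.interfaceStats('tap05839a7c-53')
    print('network.outgoing.bytes =', stats[4])  # tx_bytes
    conn.close()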
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.905 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.905 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.905 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:34:01.905639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.906 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.906 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.906 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
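Delta meters subtract the previous poll's cumulative reading, which is why the inspector logged "No delta meter predecessor" for tap05839a7c-53 above: the first observation of a vNIC has nothing to diff against. A minimal sketch of that bookkeeping (an entirely hypothetical helper, not ceilometer's implementation):

    # Previous cumulative readings, keyed by (instance_id, device).
    _previous = {}

    def delta(instance_id, device, current):
        """Return current - previous, or None on the first observation."""
        key = (instance_id, device)
        prior = _previous.get(key)
        _previous[key] = current
        return None if prior is None else current - prior

    print(delta('940da983', 'tap1', 4628))  # None: no predecessor yet
    print(delta('940da983', 'tap1', 4698))  # 70, matching the log above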
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.906 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.906 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.907 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:34:01.907077) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.931 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.961 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/memory.usage volume: 49.15234375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.991 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
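The fractional memory.usage values are MB derived from libvirt's KiB counters; 48.79296875 MB is exactly 49964 KiB / 1024. A sketch of one plausible derivation from the domain's memory statistics (whether 'available'/'unused' are populated depends on the balloon driver and guest, so the fallback is an assumption):

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b5d60fb8-b63e-4b0a-b908-00453be8ce37')

    stats = dom.memoryStats()  # dict of counters, values in KiB
    if 'available' in stats and 'unused' in stats:
        usage_mb = (stats['available'] - stats['unused']) / 1024.0
    else:
        usage_mb = stats['rss'] / 1024.0  # fallback: resident set size
    print('memory.usage =', usage_mb, 'MB')
    conn.close()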
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.992 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.992 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.992 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.992 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes volume: 4933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.993 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.bytes volume: 1528 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.994 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.994 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.994 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:34:01.992310) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.994 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.994 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw>]
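PollsterPermanentError is ceilometer's signal that a resource can never yield data for a pollster, so the manager blocklists it instead of retrying every cycle; here the libvirt inspector simply provides no rate meters, as the preceding DEBUG line notes. A sketch of how a pollster raises it, assuming the plugin_base module path shown in the log (the capability check itself is hypothetical):

    from ceilometer.polling import plugin_base

    def get_samples(manager, cache, resources):
        # Hypothetical check: this inspector cannot provide rate data.
        if not hasattr(manager, 'inspect_vnic_rates'):
            # Hand back the failing resources; the manager stops
            # scheduling this pollster for them permanently.
            raise plugin_base.PollsterPermanentError(resources)
        return []  # ... otherwise build and return samples here ...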
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.995 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-29T15:34:01.994555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.996 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.996 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.996 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:34:01.996024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.996 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.997 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.997 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.998 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.998 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.998 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.999 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:01.999 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:34:01.998176) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.000 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.001 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:34:02.000433) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.073 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.073 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.074 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.175 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.176 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.177 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.286 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.287 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.287 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
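Three disk.device.read.bytes samples per instance line up with the flavor's three block devices (1 GB root, 1 GB ephemeral, plus the config drive). The counters come from libvirt's per-device block statistics; a sketch with assumed device names:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b5d60fb8-b63e-4b0a-b908-00453be8ce37')

    # Cumulative (rd_req, rd_bytes, wr_req, wr_bytes, errs) per device.
    for dev in ('vda', 'vdb', 'hda'):  # assumed device names
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, 'disk.device.read.bytes =', rd_bytes)
    conn.close()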
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.289 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.290 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.290 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.291 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:34:02.290287) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.291 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.291 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.291 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.292 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.292 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.293 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:34:02.293399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.321 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.321 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.322 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.360 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.361 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.361 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.396 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.396 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.397 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
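The capacity samples show two 1 GiB virtual disks (1073741824 bytes) and a small config drive (~570 KB) per instance, consistent with the m1.small flavor and config_drive "True" seen earlier. libvirt reports these through blockInfo; a sketch with assumed device names:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('940da983-04c4-46c2-8cd4-96ce0736a67e')

    for dev in ('vda', 'vdb', 'hda'):  # assumed device names
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'disk.device.capacity =', capacity)
    conn.close()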
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.398 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.398 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.399 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.399 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 38180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:34:02.399084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.399 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/cpu volume: 319650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.400 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/cpu volume: 33380000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
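The cpu meter is cumulative guest CPU time in nanoseconds, so the 38180000000 above means test_0 has consumed roughly 38.2 s of CPU since boot. A sketch of the aggregate read, assuming the same read-only libvirt connection as before:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('b5d60fb8-b63e-4b0a-b908-00453be8ce37')

    # total=True returns a single aggregate record across all vCPUs,
    # with 'cpu_time' in nanoseconds.
    total = dom.getCPUStats(True)[0]
    print('cpu =', total['cpu_time'], 'ns')
    conn.close()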
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.400 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.401 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.401 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:34:02.401454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.402 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.402 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.403 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 490412710 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.403 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 89716861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.403 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 69907902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.404 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 446638356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.404 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 82659007 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.404 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 63931559 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
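The "Checking if we need coordination" / hashring lines that open every cycle report that these sources define no coordination group, so this agent polls everything it discovers locally. When a group is configured, agents share the resource set through a tooz hash ring; a rough sketch of that idea, with an assumed memcached backend and illustrative names (ceilometer wraps this in its own coordination helper):

    # Sketch only: partitioned polling via a tooz hash ring (assumed backend).
    from tooz import coordination

    coordinator = coordination.get_coordinator(
        "memcached://127.0.0.1:11211", b"compute-0-agent")  # illustrative URL
    coordinator.start(start_heart=True)

    partitioner = coordinator.join_partitioned_group("compute-pollsters")
    all_instances = ["b5d60fb8", "940da983", "98515579"]   # from the log above
    mine = [i for i in all_instances if partitioner.belongs_to_self(i)]
    # Each agent in the group polls only the instances hashed to it.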
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.405 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.405 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.406 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.406 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.406 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.406 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-29T15:34:02.406391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.406 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw>]
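This ERROR is the manager's blacklist mechanism rather than a crash: a pollster that can never succeed for a resource raises PollsterPermanentError with the failing resources, and the manager stops scheduling that pollster for them on this source. A simplified sketch of the contract (names abridged from what the log's module paths suggest, not a full reimplementation):

    # Simplified sketch of the permanent-error contract.
    class PollsterPermanentError(Exception):
        """Carries the resources that should never be polled again."""
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    class IncomingBytesRateSketch:
        def get_samples(self, manager, cache, resources):
            # LibvirtInspector exposes no per-interface rate data (see the
            # DEBUG line above), so these resources are abandoned for good.
            raise PollsterPermanentError(resources)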
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.407 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.407 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.408 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:34:02.407510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.408 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.408 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.409 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.409 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.409 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.410 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.410 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.410 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.411 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
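Each disk.device.* meter is sampled once per attached block device, which is why every instance UUID appears three times per meter here (three devices each). A small parser for eyeballing these agent logs, matching the <uuid>/<meter> volume: N lines exactly as printed above (a convenience sketch, not part of ceilometer):

    import re
    from collections import defaultdict

    # Matches the DEBUG sample lines: "<instance-uuid>/<meter> volume: <int>"
    SAMPLE_RE = re.compile(r"([0-9a-f-]{36})/(\S+) volume: (\d+)")

    def group_samples(log_lines):
        """Return {(instance, meter): [volumes, ...]} from agent output."""
        grouped = defaultdict(list)
        for line in log_lines:
            m = SAMPLE_RE.search(line)
            if m:
                uuid, meter, volume = m.groups()
                grouped[(uuid, meter)].append(int(volume))
        return grouped

    # For the block above this yields, e.g.:
    # ('b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'disk.device.read.requests')
    #     -> [840, 173, 109]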
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.411 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.411 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.412 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.412 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.412 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.412 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.412 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:34:02.412419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.413 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.413 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.413 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.413 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.414 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.414 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.414 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.415 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.415 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.416 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.416 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.416 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.416 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.416 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:34:02.416693) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.417 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.417 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.417 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.418 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.418 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.418 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.418 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.419 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.420 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.420 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.420 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.421 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.421 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:34:02.420907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.421 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.421 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.422 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 1591768972 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.422 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 9381814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.422 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.422 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 861553512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.423 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 8222101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.423 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.424 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.425 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.425 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:34:02.425241) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.425 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.426 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
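power.state reports a numeric code, and volume: 1 for all three instances means they are running. The values follow the nova power-state convention (assumed here; worth confirming against your release's nova.compute.power_state):

    # Nova power-state codes (assumed mapping; confirm for your release).
    POWER_STATE = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }

    print(POWER_STATE[1])   # RUNNING, matching the volume: 1 samples above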
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.427 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:34:02.427498) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.428 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.428 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.428 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.429 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.429 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.429 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.430 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.430 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.431 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.431 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.432 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:34:02.432345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.433 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.433 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.433 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.434 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:34:02.434791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.435 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.436 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:34:02.435851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.436 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.436 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.437 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.437 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.437 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.438 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.438 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.438 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.439 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.439 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.439 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.440 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.440 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.440 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.440 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:34:02.440465) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.441 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets volume: 33 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.441 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.442 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.442 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.442 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:34:02.442767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.443 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.443 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.444 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.444 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.444 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.445 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.445 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.445 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:34:02.445003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.445 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.446 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.447 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.447 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.448 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.448 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:34:02.448076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.448 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.449 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.449 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
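Every meter above runs through the same cycle: discovery, coordination check, heartbeat, per-resource sampling, completion; the "Finished processing pollster" lines that follow are the task loop closing out each one. Condensed into a sketch (illustrative names keyed to the log's _internal_pollster_run and execute_polling_task_processing trail, not ceilometer's actual code):

    from typing import Callable, Iterable, List

    def run_polling_task(pollsters: Iterable,
                         discover: Callable[[str], List],
                         publish: Callable[[List], None]) -> None:
        for pollster in pollsters:
            resources = discover("local_instances")   # discovery process
            # coordination group is None here, so no hash-ring filtering
            samples = list(pollster.get_samples(resources))
            publish(samples)                          # samples + heartbeat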
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.450 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.450 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.450 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.450 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.450 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.450 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.450 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.451 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.452 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:34:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:34:02.453 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
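The manager brackets each meter between an INFO "Polling pollster ..." line and a matching "Finished polling pollster ..." line, so per-pollster latency can be recovered from the timestamps alone. A minimal parsing sketch in Python, assuming journal lines shaped exactly like the ceilometer lines above:

import re
from datetime import datetime

# Matches the manager lines above, e.g.
# "2025-11-29 15:34:02.447 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error ..."
LINE = re.compile(
    r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO ceilometer\.polling\.manager "
    r"\[-\] (?P<event>Polling|Finished polling) pollster (?P<meter>\S+)"
)

def pollster_durations(lines):
    """Pair 'Polling'/'Finished polling' INFO lines, yield (meter, seconds)."""
    started = {}
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        ts = datetime.strptime(m["ts"], "%Y-%m-%d %H:%M:%S.%f")
        if m["event"] == "Polling":
            started[m["meter"]] = ts
        elif m["meter"] in started:
            yield m["meter"], (ts - started.pop(m["meter"])).total_seconds()

# Usage, against a journal export of this host:
# with open("compute-0.log") as f:
#     for meter, dur in pollster_durations(f):
#         print(f"{meter}: {dur * 1000:.1f} ms")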
Nov 29 15:34:02 compute-0 podman[243129]: 2025-11-29 15:34:02.649033994 +0000 UTC m=+0.098221459 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 15:34:02 compute-0 nova_compute[189485]: 2025-11-29 15:34:02.990 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:34:03 compute-0 nova_compute[189485]: 2025-11-29 15:34:03.013 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:34:03 compute-0 nova_compute[189485]: 2025-11-29 15:34:03.014 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:34:03 compute-0 nova_compute[189485]: 2025-11-29 15:34:03.015 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.006 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.618 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.619 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.620 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
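The Acquiring / "acquired ... waited" / "released ... held" triple is oslo.concurrency's standard trace: callers serialize on a named internal semaphore, and the library logs wait and hold times at DEBUG. A minimal sketch of the calling pattern (nova wraps this in its own helpers; only the lock name and semantics below are taken from the log):

from oslo_concurrency import lockutils

# Decorator form: every caller of this function serializes on the
# "compute_resources" semaphore, producing log lines like those above.
@lockutils.synchronized('compute_resources')
def clean_compute_node_cache():
    # ... work done while holding the lock ...
    pass

# Context-manager form of the same lock:
with lockutils.lock('compute_resources'):
    pass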
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.620 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.719 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.822 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.825 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.886 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.889 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.953 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:05 compute-0 nova_compute[189485]: 2025-11-29 15:34:05.954 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.056 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.066 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.166 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.168 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.235 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.236 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.335 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.336 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.385 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.436 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.445 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.526 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.527 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.636 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.638 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 podman[243174]: 2025-11-29 15:34:06.678232943 +0000 UTC m=+0.123915641 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.706 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.709 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:34:06 compute-0 nova_compute[189485]: 2025-11-29 15:34:06.810 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
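Each image probe above runs qemu-img info under oslo_concurrency.prlimit, which caps the child's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30) so a malformed image cannot hang or balloon the resource audit; --force-share avoids taking the image lock while the guest is running. A sketch that reproduces the probe, built directly from the command line in the log (the instance path placeholder is illustrative):

import json
import subprocess

def qemu_img_info(path):
    """Run qemu-img info the way the log shows: wrapped in
    oslo_concurrency.prlimit with address-space and CPU caps."""
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", path, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True)
    return json.loads(out.stdout)

# info = qemu_img_info("/var/lib/nova/instances/<uuid>/disk")
# print(info["format"], info["virtual-size"])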
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.331 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.334 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4836MB free_disk=72.33853149414062GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.335 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.335 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.440 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.440 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.441 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 98515579-e916-472d-99ab-5492cfa34aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.441 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.441 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.540 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.556 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.557 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:34:07 compute-0 nova_compute[189485]: 2025-11-29 15:34:07.558 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.222s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
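The inventory report fixes the schedulable capacity: placement treats each resource class as (total - reserved) * allocation_ratio, which is why 3 allocated vCPUs leave ample headroom even though the tracker's raw view shows free_vcpus=5 (8 total minus 3 allocated, before the ratio). Recomputing from the numbers in the inventory line above:

# Capacity placement schedules against, per the inventory data logged above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g}")
# VCPU: 32        -> 3 allocated, 29 schedulable at the 4.0 overcommit ratio
# MEMORY_MB: 7167
# DISK_GB: 70.2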
Nov 29 15:34:08 compute-0 nova_compute[189485]: 2025-11-29 15:34:08.558 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:08 compute-0 nova_compute[189485]: 2025-11-29 15:34:08.561 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:08 compute-0 nova_compute[189485]: 2025-11-29 15:34:08.561 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:10 compute-0 nova_compute[189485]: 2025-11-29 15:34:10.009 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:10 compute-0 nova_compute[189485]: 2025-11-29 15:34:10.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:34:10 compute-0 nova_compute[189485]: 2025-11-29 15:34:10.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
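_reclaim_queued_deletes is a no-op here because reclaim_instance_interval is non-positive: deferred delete (instances parked in SOFT_DELETED and reclaimed by this periodic task) only engages when the interval is set above zero. A nova.conf fragment that would activate it; the one-hour value is illustrative, not taken from this deployment:

[DEFAULT]
# Reclaim SOFT_DELETED instances after this many seconds.
# <= 0 disables the task, producing the "skipping..." line above.
reclaim_instance_interval = 3600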
Nov 29 15:34:11 compute-0 nova_compute[189485]: 2025-11-29 15:34:11.389 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:15 compute-0 nova_compute[189485]: 2025-11-29 15:34:15.011 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:16 compute-0 nova_compute[189485]: 2025-11-29 15:34:16.393 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:19 compute-0 podman[243209]: 2025-11-29 15:34:19.684417895 +0000 UTC m=+0.116438984 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:34:20 compute-0 nova_compute[189485]: 2025-11-29 15:34:20.018 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:21 compute-0 nova_compute[189485]: 2025-11-29 15:34:21.398 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:25 compute-0 nova_compute[189485]: 2025-11-29 15:34:25.019 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:26 compute-0 nova_compute[189485]: 2025-11-29 15:34:26.402 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:27 compute-0 podman[243231]: 2025-11-29 15:34:27.675752063 +0000 UTC m=+0.117699908 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 15:34:29 compute-0 podman[243249]: 2025-11-29 15:34:29.656623595 +0000 UTC m=+0.097037623 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:34:29 compute-0 podman[243247]: 2025-11-29 15:34:29.659624006 +0000 UTC m=+0.104432622 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, release=1214.1726694543, io.openshift.expose-services=)
Nov 29 15:34:29 compute-0 podman[243248]: 2025-11-29 15:34:29.678955364 +0000 UTC m=+0.115911210 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 15:34:29 compute-0 podman[243250]: 2025-11-29 15:34:29.708065925 +0000 UTC m=+0.144570338 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 15:34:29 compute-0 podman[203677]: time="2025-11-29T15:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:34:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:34:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Nov 29 15:34:30 compute-0 nova_compute[189485]: 2025-11-29 15:34:30.022 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:31 compute-0 nova_compute[189485]: 2025-11-29 15:34:31.407 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:31 compute-0 openstack_network_exporter[205841]: ERROR   15:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:34:31 compute-0 openstack_network_exporter[205841]: ERROR   15:34:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:34:31 compute-0 openstack_network_exporter[205841]: ERROR   15:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:34:31 compute-0 openstack_network_exporter[205841]: ERROR   15:34:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:34:31 compute-0 openstack_network_exporter[205841]: ERROR   15:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:34:31 compute-0 podman[243326]: 2025-11-29 15:34:31.641405093 +0000 UTC m=+0.092863743 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Nov 29 15:34:33 compute-0 podman[243346]: 2025-11-29 15:34:33.697206094 +0000 UTC m=+0.137862359 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:34:35 compute-0 nova_compute[189485]: 2025-11-29 15:34:35.023 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:36 compute-0 nova_compute[189485]: 2025-11-29 15:34:36.411 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:37 compute-0 podman[243366]: 2025-11-29 15:34:37.657926561 +0000 UTC m=+0.089158772 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
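Every podman health_status event above carries the same key=value field list, so a watcher only needs three of them: name, health_status, and health_failing_streak. A small filter sketch, with the field names lifted from the lines themselves:

import re

# Matches the podman health events above, e.g.
# "container health_status <64-hex-id> (image=..., name=multipathd, health_status=healthy, health_failing_streak=0, ...)"
HEALTH = re.compile(
    r"container health_status (?P<id>[0-9a-f]{64}) \(.*?"
    r"name=(?P<name>[^,]+), health_status=(?P<status>[^,]+), "
    r"health_failing_streak=(?P<streak>\d+)"
)

def unhealthy(lines):
    """Yield (container, status, failing_streak) for non-healthy events."""
    for line in lines:
        m = HEALTH.search(line)
        if m and m["status"] != "healthy":
            yield m["name"], m["status"], int(m["streak"])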
Nov 29 15:34:40 compute-0 nova_compute[189485]: 2025-11-29 15:34:40.025 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:40 compute-0 nova_compute[189485]: 2025-11-29 15:34:40.505 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:40 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:40.507 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:34:40 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:40.507 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 15:34:41 compute-0 nova_compute[189485]: 2025-11-29 15:34:41.414 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:44 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:44.511 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
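[editor's note] The DbSetCommand above is the OVN metadata agent recording the processed nb_cfg back into its Chassis_Private row. A hedged sketch of issuing the same command through ovsdbapp; `sb_idl` is a placeholder for the agent's southbound connection object, while the table, record UUID, and values are copied from the log record:

    def bump_sb_cfg(sb_idl):
        # Mirrors the logged DbSetCommand: set external_ids on the chassis
        # row, skipping silently if the row is gone (if_exists=True).
        sb_idl.db_set(
            'Chassis_Private', '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a',
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),
            if_exists=True,
        ).execute(check_error=True)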
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.028 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.145 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.146 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.169 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.293 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.294 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
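[editor's note] The "Acquiring lock" / "Lock ... acquired" / "Lock ... released" trio that brackets instance_claim here is emitted by oslo.concurrency's synchronized wrappers. A minimal sketch of the same pattern with its lock() context manager; the lock name is copied from the log, the body is illustrative:

    from oslo_concurrency import lockutils

    # Entering the block logs the acquire, leaving it logs the release,
    # which is exactly what produces the DEBUG lines around the claim.
    with lockutils.lock('compute_resources'):
        pass  # resource-tracker work happens here in nova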
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.313 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.314 189489 INFO nova.compute.claims [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.518 189489 DEBUG nova.compute.provider_tree [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.537 189489 DEBUG nova.scheduler.client.report [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
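[editor's note] Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio; applying the standard formula to the values in the log line:

    # Capacity per resource class, computed from the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inventory.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2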
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.570 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.276s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.571 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.629 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.629 189489 DEBUG nova.network.neutron [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.650 189489 INFO nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.692 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.784 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.786 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.787 189489 INFO nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Creating image(s)#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.787 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.788 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.789 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.802 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.859 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
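[editor's note] The "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30" wrapper in these commands is oslo.concurrency capping qemu-img's address space (1 GiB) and CPU time (30 s) so a malformed image cannot hang or balloon the compute service. A sketch of the call that generates such a command line, with the path copied from the log:

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))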
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.860 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "a7996d50170914c9415f43103aca35ccc26834bd" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.861 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.872 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.927 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.928 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd,backing_fmt=raw /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.966 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd,backing_fmt=raw /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk 1073741824" returned: 0 in 0.038s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
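[editor's note] That qemu-img create call is the core of nova's Qcow2 image backend: the instance disk is a copy-on-write overlay whose backing file is the cached base image, so only guest writes consume new space. The equivalent standalone invocation, with paths and size copied from the log:

    import subprocess

    base = '/var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd'
    disk = '/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk'
    # Create a 1073741824-byte qcow2 overlay backed by the raw base image.
    subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                    '-o', f'backing_file={base},backing_fmt=raw',
                    disk, '1073741824'], check=True)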
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.967 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a7996d50170914c9415f43103aca35ccc26834bd" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.106s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:34:45 compute-0 nova_compute[189485]: 2025-11-29 15:34:45.967 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.022 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.023 189489 DEBUG nova.virt.disk.api [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Checking if we can resize image /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.024 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.083 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.088 189489 DEBUG nova.virt.disk.api [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Cannot resize image /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
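[editor's note] The can_resize_image lines reflect a simple guard: nova reads the disk's current virtual size with qemu-img info and only ever grows it; here the requested size is not larger, so the resize is skipped. A paraphrase of that check (not nova's exact code):

    import json
    import subprocess

    def can_resize_image(path, new_size):
        # qemu-img reports the current virtual size; growing is allowed,
        # shrinking is not (matching the "Cannot resize image" DEBUG above).
        out = subprocess.run(['qemu-img', 'info', '--output=json', path],
                             capture_output=True, check=True).stdout
        return new_size > json.loads(out)['virtual-size']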
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.090 189489 DEBUG nova.objects.instance [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'migration_context' on Instance uuid dd0fdf5e-41d6-4c60-a546-112da1f37416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.115 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.116 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.118 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.153 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.208 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.210 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.211 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.230 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.294 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.295 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.365 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 1073741824" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.367 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.368 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.418 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.460 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.461 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.463 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Ensure instance console log exists: /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.464 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.465 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:34:46 compute-0 nova_compute[189485]: 2025-11-29 15:34:46.466 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:34:49 compute-0 nova_compute[189485]: 2025-11-29 15:34:49.761 189489 DEBUG nova.network.neutron [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Successfully updated port: 990859f2-5f64-4a2a-9f1d-694b0d52b155 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:34:49 compute-0 nova_compute[189485]: 2025-11-29 15:34:49.795 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:34:49 compute-0 nova_compute[189485]: 2025-11-29 15:34:49.796 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquired lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:34:49 compute-0 nova_compute[189485]: 2025-11-29 15:34:49.796 189489 DEBUG nova.network.neutron [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:34:49 compute-0 nova_compute[189485]: 2025-11-29 15:34:49.883 189489 DEBUG nova.compute.manager [req-ef26a914-5d2d-4ae2-ba6d-a2a80e0cdebd req-998b99c7-2c6b-4155-b30f-44b684e7bdea 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received event network-changed-990859f2-5f64-4a2a-9f1d-694b0d52b155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:34:49 compute-0 nova_compute[189485]: 2025-11-29 15:34:49.884 189489 DEBUG nova.compute.manager [req-ef26a914-5d2d-4ae2-ba6d-a2a80e0cdebd req-998b99c7-2c6b-4155-b30f-44b684e7bdea 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Refreshing instance network info cache due to event network-changed-990859f2-5f64-4a2a-9f1d-694b0d52b155. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:34:49 compute-0 nova_compute[189485]: 2025-11-29 15:34:49.884 189489 DEBUG oslo_concurrency.lockutils [req-ef26a914-5d2d-4ae2-ba6d-a2a80e0cdebd req-998b99c7-2c6b-4155-b30f-44b684e7bdea 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:34:49 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 15:34:50 compute-0 nova_compute[189485]: 2025-11-29 15:34:50.031 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:50 compute-0 podman[243420]: 2025-11-29 15:34:50.056502433 +0000 UTC m=+0.075925467 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:34:50 compute-0 nova_compute[189485]: 2025-11-29 15:34:50.426 189489 DEBUG nova.network.neutron [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:34:51 compute-0 nova_compute[189485]: 2025-11-29 15:34:51.422 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:52 compute-0 nova_compute[189485]: 2025-11-29 15:34:52.975 189489 DEBUG nova.network.neutron [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updating instance_info_cache with network_info: [{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
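[editor's note] The network_info payload in that line is ordinary JSON, so addresses can be pulled straight out of a captured record. A trimmed copy of the logged VIF with the extraction logic:

    vif = {
        'id': '990859f2-5f64-4a2a-9f1d-694b0d52b155',
        'network': {'subnets': [{'ips': [{
            'address': '192.168.0.225',
            'floating_ips': [{'address': '192.168.122.224'}]}]}]},
    }
    for subnet in vif['network']['subnets']:
        for ip in subnet['ips']:
            print(ip['address'], [f['address'] for f in ip['floating_ips']])
    # 192.168.0.225 ['192.168.122.224']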
Nov 29 15:34:52 compute-0 nova_compute[189485]: 2025-11-29 15:34:52.997 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Releasing lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:34:52 compute-0 nova_compute[189485]: 2025-11-29 15:34:52.998 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Instance network_info: |[{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 15:34:52 compute-0 nova_compute[189485]: 2025-11-29 15:34:52.998 189489 DEBUG oslo_concurrency.lockutils [req-ef26a914-5d2d-4ae2-ba6d-a2a80e0cdebd req-998b99c7-2c6b-4155-b30f-44b684e7bdea 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:34:52 compute-0 nova_compute[189485]: 2025-11-29 15:34:52.999 189489 DEBUG nova.network.neutron [req-ef26a914-5d2d-4ae2-ba6d-a2a80e0cdebd req-998b99c7-2c6b-4155-b30f-44b684e7bdea 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Refreshing network info cache for port 990859f2-5f64-4a2a-9f1d-694b0d52b155 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.002 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Start _get_guest_xml network_info=[{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:24:51Z,direct_url=<?>,disk_format='qcow2',id=a4b79580-904f-4527-8cf1-3888cf1ff785,min_disk=0,min_ram=0,name='cirros',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:24:52Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.013 189489 WARNING nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.020 189489 DEBUG nova.virt.libvirt.host [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.020 189489 DEBUG nova.virt.libvirt.host [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.028 189489 DEBUG nova.virt.libvirt.host [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.029 189489 DEBUG nova.virt.libvirt.host [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.029 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.029 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:24:56Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='34af94d1-a6e1-4bf0-8957-036dc948fe9d',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:24:51Z,direct_url=<?>,disk_format='qcow2',id=a4b79580-904f-4527-8cf1-3888cf1ff785,min_disk=0,min_ram=0,name='cirros',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:24:52Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.030 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.030 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.031 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.031 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.031 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.031 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.032 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.032 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.032 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.033 189489 DEBUG nova.virt.hardware [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
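[editor's note] The topology walk above settles on sockets=1, cores=1, threads=1 because, with no flavor or image constraints, nova enumerates factorizations of the vCPU count (1 here) within the 65536-per-dimension limits, and 1 has exactly one. An illustrative enumeration, not nova's exact code:

    import itertools

    vcpus = 1
    topologies = [(s, c, t)
                  for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3)
                  if s * c * t == vcpus]
    print(topologies)  # [(1, 1, 1)]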
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.037 189489 DEBUG nova.virt.libvirt.vif [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:34:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me',id=4,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-saogslav',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:34:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzU3NjQ5NjExNDQzNDM5MDQzOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Nov 29 15:34:53 compute-0 nova_compute[189485]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzU3NjQ5NjExNDQzNDM5MDQzOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0tLQo=',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=dd0fdf5e-41d6-4c60-a546-112da1f37416,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.037 189489 DEBUG nova.network.os_vif_util [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.038 189489 DEBUG nova.network.os_vif_util [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:c1:c2,bridge_name='br-int',has_traffic_filtering=True,id=990859f2-5f64-4a2a-9f1d-694b0d52b155,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap990859f2-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.039 189489 DEBUG nova.objects.instance [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid dd0fdf5e-41d6-4c60-a546-112da1f37416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.063 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <uuid>dd0fdf5e-41d6-4c60-a546-112da1f37416</uuid>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <name>instance-00000004</name>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <memory>524288</memory>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <nova:name>vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me</nova:name>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:34:53</nova:creationTime>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <nova:flavor name="m1.small">
Nov 29 15:34:53 compute-0 nova_compute[189485]:        <nova:memory>512</nova:memory>
Nov 29 15:34:53 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:34:53 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:34:53 compute-0 nova_compute[189485]:        <nova:ephemeral>1</nova:ephemeral>
Nov 29 15:34:53 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:34:53 compute-0 nova_compute[189485]:        <nova:user uuid="5cbf094e2197487fbe16a0fe6e3076ba">admin</nova:user>
Nov 29 15:34:53 compute-0 nova_compute[189485]:        <nova:project uuid="04d676205d9142d19f3d4ce7389f72a2">admin</nova:project>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="a4b79580-904f-4527-8cf1-3888cf1ff785"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:34:53 compute-0 nova_compute[189485]:        <nova:port uuid="990859f2-5f64-4a2a-9f1d-694b0d52b155">
Nov 29 15:34:53 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="192.168.0.225" ipVersion="4"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <system>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <entry name="serial">dd0fdf5e-41d6-4c60-a546-112da1f37416</entry>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <entry name="uuid">dd0fdf5e-41d6-4c60-a546-112da1f37416</entry>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </system>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <os>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  </os>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <features>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  </features>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <target dev="vdb" bus="virtio"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.config"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:96:c1:c2"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <target dev="tap990859f2-5f"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/console.log" append="off"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <video>
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </video>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:34:53 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:34:53 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:34:53 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:34:53 compute-0 nova_compute[189485]: </domain>
Nov 29 15:34:53 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
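The _get_guest_xml dump above is the exact domain definition handed to libvirt. A quick structural check of a saved copy (journald prefixes stripped first; guest.xml is a hypothetical filename), which should print the two qcow2 disks, the config-drive cdrom and the tap interface's MAC:

    import xml.etree.ElementTree as ET

    tree = ET.parse("guest.xml")  # the <domain> block above, prefixes removed
    root = tree.getroot()
    for disk in root.findall("./devices/disk"):
        src = disk.find("source")
        print(disk.get("device"), src.get("file") if src is not None else None)
    for iface in root.findall("./devices/interface"):
        print("interface mac:", iface.find("mac").get("address"))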
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.064 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Preparing to wait for external event network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.064 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.064 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.065 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
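The Acquiring/acquired/released triple above is oslo.concurrency's lockutils serializing access to the per-instance event list. The same pattern in miniature (a sketch, names hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("dd0fdf5e-41d6-4c60-a546-112da1f37416-events")
    def _create_or_get_event():
        # callers block here until the named lock is free, which is what
        # produces the waited/held timings logged above
        pass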
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.065 189489 DEBUG nova.virt.libvirt.vif [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:34:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me',id=4,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-saogslav',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:34:45Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzU3NjQ5NjExNDQzNDM5MDQzOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Nov 29 15:34:53 compute-0 nova_compute[189485]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzU3NjQ5NjExNDQzNDM5MDQzOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0tLQo=',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=dd0fdf5e-41d6-4c60-a546-112da1f37416,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.066 189489 DEBUG nova.network.os_vif_util [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.066 189489 DEBUG nova.network.os_vif_util [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:96:c1:c2,bridge_name='br-int',has_traffic_filtering=True,id=990859f2-5f64-4a2a-9f1d-694b0d52b155,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap990859f2-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.067 189489 DEBUG os_vif [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:c1:c2,bridge_name='br-int',has_traffic_filtering=True,id=990859f2-5f64-4a2a-9f1d-694b0d52b155,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap990859f2-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.067 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.068 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.068 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.073 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.073 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap990859f2-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.074 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap990859f2-5f, col_values=(('external_ids', {'iface-id': '990859f2-5f64-4a2a-9f1d-694b0d52b155', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:96:c1:c2', 'vm-uuid': 'dd0fdf5e-41d6-4c60-a546-112da1f37416'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
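The transaction above (AddPortCommand followed by DbSetCommand on the Interface row) is how os-vif plugs the port into br-int. The same change can be reproduced or audited by hand as a single ovs-vsctl transaction; a sketch, with the values taken from the log:

    import subprocess

    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tap990859f2-5f",
        "--", "set", "Interface", "tap990859f2-5f",
        "external_ids:iface-id=990859f2-5f64-4a2a-9f1d-694b0d52b155",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:96:c1:c2",
        "external_ids:vm-uuid=dd0fdf5e-41d6-4c60-a546-112da1f37416",
    ], check=True)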
Nov 29 15:34:53 compute-0 NetworkManager[56360]: <info>  [1764430493.0785] manager: (tap990859f2-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.078 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.083 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.084 189489 INFO os_vif [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:96:c1:c2,bridge_name='br-int',has_traffic_filtering=True,id=990859f2-5f64-4a2a-9f1d-694b0d52b155,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap990859f2-5f')#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.179 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.180 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.180 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.181 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No VIF found with MAC fa:16:3e:96:c1:c2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.181 189489 INFO nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Using config drive#033[00m
Nov 29 15:34:53 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:34:53.037 189489 DEBUG nova.virt.libvirt.vif [None req-6c460f5d-6c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 15:34:53 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:34:53.065 189489 DEBUG nova.virt.libvirt.vif [None req-6c460f5d-6c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
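rsyslogd is truncating the two oversized vif DEBUG messages here (8192 bytes against the configured 8096), which is consistent with the split, incomplete base64 user_data blocks earlier in this log. If the full lines are wanted, the limit can be raised; a minimal sketch for /etc/rsyslog.conf, assuming rsyslog v8 (the directive must precede any input module loads):

    # raise the input message size limit before any input() modules
    global(maxMessageSize="64k")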
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.751 189489 INFO nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Creating config drive at /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.config#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.756 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar07n9rm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:34:53 compute-0 nova_compute[189485]: 2025-11-29 15:34:53.888 189489 DEBUG oslo_concurrency.processutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpar07n9rm" returned: 0 in 0.132s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
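The mkisofs run above built the config-2 config drive in 0.132s. Its contents can be listed without mounting it; a sketch using isoinfo (the -J flag matches the Joliet tree mkisofs was asked to write):

    import subprocess

    iso = "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.config"
    # prints a directory listing of the ISO's Joliet tree
    subprocess.run(["isoinfo", "-J", "-l", "-i", iso], check=True)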
Nov 29 15:34:54 compute-0 kernel: tap990859f2-5f: entered promiscuous mode
Nov 29 15:34:54 compute-0 NetworkManager[56360]: <info>  [1764430494.0007] manager: (tap990859f2-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.003 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:54 compute-0 ovn_controller[97827]: 2025-11-29T15:34:54Z|00045|binding|INFO|Claiming lport 990859f2-5f64-4a2a-9f1d-694b0d52b155 for this chassis.
Nov 29 15:34:54 compute-0 ovn_controller[97827]: 2025-11-29T15:34:54Z|00046|binding|INFO|990859f2-5f64-4a2a-9f1d-694b0d52b155: Claiming fa:16:3e:96:c1:c2 192.168.0.225
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.016 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:c1:c2 192.168.0.225'], port_security=['fa:16:3e:96:c1:c2 192.168.0.225'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nju3ymh64jso-he4f6zydsa2j-l6hxu724o2mv-port-fyvusaifittf', 'neutron:cidrs': '192.168.0.225/24', 'neutron:device_id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa63adc8-00c5-408f-a9a0-653db4d11058', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nju3ymh64jso-he4f6zydsa2j-l6hxu724o2mv-port-fyvusaifittf', 'neutron:project_id': '04d676205d9142d19f3d4ce7389f72a2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ab1ce576-0f3a-4a3e-abf1-69502fd41864', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.224'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566ecd39-faeb-413e-8894-df94f2ba695a, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=990859f2-5f64-4a2a-9f1d-694b0d52b155) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.018 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 990859f2-5f64-4a2a-9f1d-694b0d52b155 in datapath fa63adc8-00c5-408f-a9a0-653db4d11058 bound to our chassis#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.019 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa63adc8-00c5-408f-a9a0-653db4d11058#033[00m
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.034 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:54 compute-0 ovn_controller[97827]: 2025-11-29T15:34:54Z|00047|binding|INFO|Setting lport 990859f2-5f64-4a2a-9f1d-694b0d52b155 ovn-installed in OVS
Nov 29 15:34:54 compute-0 ovn_controller[97827]: 2025-11-29T15:34:54Z|00048|binding|INFO|Setting lport 990859f2-5f64-4a2a-9f1d-694b0d52b155 up in Southbound
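ovn-controller has now claimed the lport and marked it up in the Southbound database. The binding can be cross-checked with ovn-sbctl; a sketch, assuming the OVN utilities and an SB connection are configured on this host:

    import subprocess

    # should show the Port_Binding row with chassis set to compute-0
    subprocess.run([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=990859f2-5f64-4a2a-9f1d-694b0d52b155",
    ], check=True)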
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.039 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.045 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[011ef04a-9989-4e91-98ce-17bae733ac5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:34:54 compute-0 systemd-machined[155802]: New machine qemu-4-instance-00000004.
Nov 29 15:34:54 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
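libvirt has registered the new guest with systemd-machined under qemu-4-instance-00000004. Either view can confirm the domain once boot proceeds; a sketch:

    import subprocess

    # machined's view of the qemu machine unit
    subprocess.run(["machinectl", "show", "qemu-4-instance-00000004"], check=True)
    # libvirt's view of the same domain
    subprocess.run(["virsh", "domstate", "instance-00000004"], check=True)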
Nov 29 15:34:54 compute-0 systemd-udevd[243467]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.076 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[6c51a4fe-b1a1-4fd9-95e6-34b50bc267c8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.080 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[72b9e74e-02c2-476e-9797-616d95f73116]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:34:54 compute-0 NetworkManager[56360]: <info>  [1764430494.0950] device (tap990859f2-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:34:54 compute-0 NetworkManager[56360]: <info>  [1764430494.1002] device (tap990859f2-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.108 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[df48cf40-5e80-4c2b-bf22-0f2ef7102a05]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.128 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[7316b9f8-5ac2-4c5b-b95c-cfa135c24729]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa63adc8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:9e:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373724, 'reachable_time': 41387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243475, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.148 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[591c26c5-72ce-4eb0-9b86-e494c06e1408]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373741, 'tstamp': 373741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243479, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373746, 'tstamp': 373746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243479, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
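The privsep replies above show the metadata agent's namespace plumbing: tapfa63adc8-01 inside ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058 carries 169.254.169.254/32 and 192.168.0.2/24, the addresses the instance will reach for metadata. A sketch for confirming that from the host:

    import subprocess

    ns = "ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058"
    # should list 169.254.169.254/32 and 192.168.0.2/24 on tapfa63adc8-01
    subprocess.run(
        ["ip", "netns", "exec", ns, "ip", "addr", "show", "tapfa63adc8-01"],
        check=True)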
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.149 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa63adc8-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.153 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa63adc8-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.153 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.153 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa63adc8-00, col_values=(('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:34:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:54.153 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.153 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.597 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430494.5969143, dd0fdf5e-41d6-4c60-a546-112da1f37416 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.598 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] VM Started (Lifecycle Event)
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.620 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.629 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430494.5970602, dd0fdf5e-41d6-4c60-a546-112da1f37416 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.629 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] VM Paused (Lifecycle Event)
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.647 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.655 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.675 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] During sync_power_state the instance has a pending task (spawning). Skip.
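[Editor's note] The "Paused" lifecycle event races with the ongoing spawn: the DB still records power_state 0 (NOSTATE) while libvirt already reports 3 (PAUSED), and the sync handler refuses to act while a task owns the instance. A simplified paraphrase of that guard (not nova's actual code; the constants match nova.compute.power_state):

    # Why the log says "has a pending task (spawning). Skip." -- sketch only.
    NOSTATE, RUNNING, PAUSED = 0, 1, 3  # values as in nova.compute.power_state

    def should_sync_power_state(task_state, db_power_state, vm_power_state):
        if task_state is not None:
            # A task such as 'spawning' owns the instance; syncing now would
            # fight the in-flight transition (DB: NOSTATE=0, libvirt: PAUSED=3).
            return False
        return db_power_state != vm_power_state

With task_state 'spawning' the handler returns without touching the DB, which is exactly the "Skip" line above.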
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.961 189489 DEBUG nova.compute.manager [req-7fd12696-7ad4-4f17-9b17-977266477c88 req-c48ef7c5-1fc7-46e1-b058-1823548f9c22 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received event network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.962 189489 DEBUG oslo_concurrency.lockutils [req-7fd12696-7ad4-4f17-9b17-977266477c88 req-c48ef7c5-1fc7-46e1-b058-1823548f9c22 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.962 189489 DEBUG oslo_concurrency.lockutils [req-7fd12696-7ad4-4f17-9b17-977266477c88 req-c48ef7c5-1fc7-46e1-b058-1823548f9c22 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.962 189489 DEBUG oslo_concurrency.lockutils [req-7fd12696-7ad4-4f17-9b17-977266477c88 req-c48ef7c5-1fc7-46e1-b058-1823548f9c22 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.963 189489 DEBUG nova.compute.manager [req-7fd12696-7ad4-4f17-9b17-977266477c88 req-c48ef7c5-1fc7-46e1-b058-1823548f9c22 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Processing event network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.965 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
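[Editor's note] Two request contexts meet here: the neutron-triggered request (req-7fd12696...) pops the network-vif-plugged event, waking the spawning request (req-6c460f5d...) that was blocked in wait_for_instance_event. A toy model of that rendezvous (illustrative only; nova's version is eventlet-based and locked per instance, as the "-events" lock lines show):

    # Toy rendezvous for wait_for_instance_event / pop_instance_event.
    import threading

    _events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_wait(instance_uuid, name):
        ev = threading.Event()
        _events[(instance_uuid, name)] = ev
        return ev  # the spawning side later calls ev.wait(timeout=...)

    def pop_instance_event(instance_uuid, name):
        ev = _events.pop((instance_uuid, name), None)
        if ev is None:
            return False  # -> "No waiting events found dispatching ..."
        ev.set()          # wakes the waiter: "Instance event wait completed"
        return True

The False branch is what produces the "Received unexpected event" warning seen a few seconds later, once the instance is already active and no waiter is registered.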
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.973 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430494.9730804, dd0fdf5e-41d6-4c60-a546-112da1f37416 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.974 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] VM Resumed (Lifecycle Event)
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.976 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.982 189489 INFO nova.virt.libvirt.driver [-] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Instance spawned successfully.
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.983 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 15:34:54 compute-0 nova_compute[189485]: 2025-11-29 15:34:54.999 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.009 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.017 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.018 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.018 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.019 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.020 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.020 189489 DEBUG nova.virt.libvirt.driver [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.036 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.056 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.089 189489 INFO nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Took 9.30 seconds to spawn the instance on the hypervisor.
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.090 189489 DEBUG nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.131 189489 DEBUG nova.network.neutron [req-ef26a914-5d2d-4ae2-ba6d-a2a80e0cdebd req-998b99c7-2c6b-4155-b30f-44b684e7bdea 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updated VIF entry in instance network info cache for port 990859f2-5f64-4a2a-9f1d-694b0d52b155. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.131 189489 DEBUG nova.network.neutron [req-ef26a914-5d2d-4ae2-ba6d-a2a80e0cdebd req-998b99c7-2c6b-4155-b30f-44b684e7bdea 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updating instance_info_cache with network_info: [{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
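[Editor's note] The network_info blob above is the per-VIF cache entry nova persists: fixed IP 192.168.0.225 with floating IP 192.168.122.224 on OVN-bound port 990859f2..., MTU 1442, tunneled. Extracting the addresses from such a blob is plain JSON walking; a small self-contained sketch (the literal is trimmed from the entry logged above):

    import json

    # Trimmed from the logged cache entry; real entries carry many more keys.
    network_info_json = '''[{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.225",
        "floating_ips": [{"address": "192.168.122.224"}]}]}]}}]'''

    for vif in json.loads(network_info_json):
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floats = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['id'], ip['address'], '->', floats)
    # prints: 990859f2-... 192.168.0.225 -> ['192.168.122.224']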
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.148 189489 DEBUG oslo_concurrency.lockutils [req-ef26a914-5d2d-4ae2-ba6d-a2a80e0cdebd req-998b99c7-2c6b-4155-b30f-44b684e7bdea 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.161 189489 INFO nova.compute.manager [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Took 9.92 seconds to build instance.
Nov 29 15:34:55 compute-0 nova_compute[189485]: 2025-11-29 15:34:55.185 189489 DEBUG oslo_concurrency.lockutils [None req-6c460f5d-6cad-42f0-a446-c67e8c3059c0 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:34:55 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 15:34:55 compute-0 systemd[1]: Started libvirt proxy daemon.
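[Editor's note] The "libvirt proxy daemon" unit is most likely virtproxyd being socket-activated: it starts on demand the moment a client opens the legacy libvirt socket. A client connection of this shape is what triggers the activation (the URI is the common nova default, assumed here, and libvirt-python must be installed):

    # Opening a libvirt connection socket-activates the proxy daemon.
    import libvirt

    conn = libvirt.open('qemu:///system')  # URI assumed, not taken from this log
    print([dom.name() for dom in conn.listAllDomains()])
    conn.close()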
Nov 29 15:34:57 compute-0 nova_compute[189485]: 2025-11-29 15:34:57.084 189489 DEBUG nova.compute.manager [req-6efa1a89-5ac4-4f10-b1a1-ca9f88b54faa req-8e8948fb-4878-4252-98a7-974dd50838ad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received event network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:34:57 compute-0 nova_compute[189485]: 2025-11-29 15:34:57.085 189489 DEBUG oslo_concurrency.lockutils [req-6efa1a89-5ac4-4f10-b1a1-ca9f88b54faa req-8e8948fb-4878-4252-98a7-974dd50838ad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:34:57 compute-0 nova_compute[189485]: 2025-11-29 15:34:57.085 189489 DEBUG oslo_concurrency.lockutils [req-6efa1a89-5ac4-4f10-b1a1-ca9f88b54faa req-8e8948fb-4878-4252-98a7-974dd50838ad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:34:57 compute-0 nova_compute[189485]: 2025-11-29 15:34:57.085 189489 DEBUG oslo_concurrency.lockutils [req-6efa1a89-5ac4-4f10-b1a1-ca9f88b54faa req-8e8948fb-4878-4252-98a7-974dd50838ad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:34:57 compute-0 nova_compute[189485]: 2025-11-29 15:34:57.085 189489 DEBUG nova.compute.manager [req-6efa1a89-5ac4-4f10-b1a1-ca9f88b54faa req-8e8948fb-4878-4252-98a7-974dd50838ad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] No waiting events found dispatching network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:34:57 compute-0 nova_compute[189485]: 2025-11-29 15:34:57.086 189489 WARNING nova.compute.manager [req-6efa1a89-5ac4-4f10-b1a1-ca9f88b54faa req-8e8948fb-4878-4252-98a7-974dd50838ad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received unexpected event network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 for instance with vm_state active and task_state None.
Nov 29 15:34:58 compute-0 nova_compute[189485]: 2025-11-29 15:34:58.077 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:34:58 compute-0 podman[243508]: 2025-11-29 15:34:58.686958534 +0000 UTC m=+0.129103834 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 15:34:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:59.166 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:34:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:59.167 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:34:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:34:59.167 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
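[Editor's note] These Acquiring/acquired/released triplets come from oslo.concurrency; in neutron and nova alike they usually enter the picture through the lockutils.synchronized decorator rather than explicit lock calls. A minimal sketch of that pattern:

    # The decorator's inner wrapper emits exactly the DEBUG triplet above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # body runs with the named in-process lock held
        pass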
Nov 29 15:34:59 compute-0 podman[203677]: time="2025-11-29T15:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:34:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:34:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Nov 29 15:35:00 compute-0 nova_compute[189485]: 2025-11-29 15:35:00.037 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:00 compute-0 podman[243529]: 2025-11-29 15:35:00.668418882 +0000 UTC m=+0.102837999 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 15:35:00 compute-0 podman[243528]: 2025-11-29 15:35:00.669618414 +0000 UTC m=+0.098856583 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, release=1214.1726694543, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 29 15:35:00 compute-0 podman[243530]: 2025-11-29 15:35:00.677790093 +0000 UTC m=+0.108083019 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm)
Nov 29 15:35:00 compute-0 podman[243531]: 2025-11-29 15:35:00.711033215 +0000 UTC m=+0.138323501 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 15:35:01 compute-0 openstack_network_exporter[205841]: ERROR   15:35:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:35:01 compute-0 openstack_network_exporter[205841]: ERROR   15:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:35:01 compute-0 openstack_network_exporter[205841]: ERROR   15:35:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:35:01 compute-0 openstack_network_exporter[205841]: ERROR   15:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:35:01 compute-0 openstack_network_exporter[205841]: ERROR   15:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
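[Editor's note] These errors are the exporter probing every OVS/OVN control socket it knows about. On a compute node ovn-northd does not run, and the kernel ("system") datapath has no DPDK PMD threads, so the ovn-northd and dpif-netdev/* probes fail by design; the error text matches what ovs-vswitchd returns for those appctl commands. The same probes by hand, for comparison:

    # Reproduce the exporter's dpif-netdev probes; on a kernel-datapath node
    # the same "please specify an existing datapath" errors are expected.
    import subprocess

    for cmd in (['ovs-appctl', 'dpif-netdev/pmd-rxq-show'],
                ['ovs-appctl', 'dpif-netdev/pmd-perf-show']):
        r = subprocess.run(cmd, capture_output=True, text=True)
        print(cmd[-1], '->', r.returncode, (r.stderr or r.stdout).strip())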
Nov 29 15:35:02 compute-0 nova_compute[189485]: 2025-11-29 15:35:02.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:02 compute-0 nova_compute[189485]: 2025-11-29 15:35:02.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:35:02 compute-0 podman[243608]: 2025-11-29 15:35:02.675283571 +0000 UTC m=+0.131730445 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:35:03 compute-0 nova_compute[189485]: 2025-11-29 15:35:03.079 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:03 compute-0 nova_compute[189485]: 2025-11-29 15:35:03.591 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:35:03 compute-0 nova_compute[189485]: 2025-11-29 15:35:03.592 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:35:03 compute-0 nova_compute[189485]: 2025-11-29 15:35:03.592 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:35:04 compute-0 podman[243629]: 2025-11-29 15:35:04.624198906 +0000 UTC m=+0.079393630 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 15:35:05 compute-0 nova_compute[189485]: 2025-11-29 15:35:05.040 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:06 compute-0 nova_compute[189485]: 2025-11-29 15:35:06.782 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updating instance_info_cache with network_info: [{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:35:06 compute-0 nova_compute[189485]: 2025-11-29 15:35:06.833 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:35:06 compute-0 nova_compute[189485]: 2025-11-29 15:35:06.833 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:35:06 compute-0 nova_compute[189485]: 2025-11-29 15:35:06.834 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:06 compute-0 nova_compute[189485]: 2025-11-29 15:35:06.834 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:06 compute-0 nova_compute[189485]: 2025-11-29 15:35:06.835 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.522 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
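[Editor's note] All of these "Running periodic task" lines come from one oslo.service loop inside the compute manager; each method is registered with a decorator and run on its own spacing. A minimal sketch of the registration pattern (method bodies and spacing values are illustrative; a real subclass is instantiated with an oslo.config conf object):

    # Minimal oslo.service periodic-task sketch.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # refresh one instance's network info cache per pass

        @periodic_task.periodic_task(spacing=60)
        def update_available_resource(self, context):
            pass  # audit local CPU/RAM/disk and report usage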
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.548 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.550 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.550 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.551 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.682 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.748 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.750 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.813 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.815 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.914 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:07 compute-0 nova_compute[189485]: 2025-11-29 15:35:07.916 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.004 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
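[Editor's note] Each qemu-img probe in the resource audit is wrapped in oslo.concurrency's prlimit helper, capping the child at 1 GiB of address space and 30 s of CPU (the --as=1073741824 --cpu=30 flags above) so a malformed image cannot wedge the periodic task. The same call expressed via the library directly (the path is illustrative; the elided uuid is not filled in):

    # The qemu-img calls above, via oslo.concurrency. Limits match the log:
    # 1 GiB address space, 30 s CPU time.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', '/var/lib/nova/instances/<uuid>/disk',  # illustrative
        '--force-share', '--output=json',
        prlimit=limits)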
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.018 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.083 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.118 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.120 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.184 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.186 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.250 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.252 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.347 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.359 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.440 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.442 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.527 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.528 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.588 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.590 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 podman[243679]: 2025-11-29 15:35:08.64481557 +0000 UTC m=+0.096438618 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.700 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.711 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.783 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.784 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.843 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.845 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.940 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:35:08 compute-0 nova_compute[189485]: 2025-11-29 15:35:08.942 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.004 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
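The probes above are one pass of nova-compute's periodic resource refresh: each `qemu-img info` call is wrapped in `oslo_concurrency.prlimit`, capping the child process at 1 GiB of address space and 30 s of CPU time so a corrupt or hostile image header cannot wedge the agent. A minimal sketch of the same probe via oslo.concurrency's public helper, assuming the library is installed and the instance path from the log exists:

```python
# Sketch only: re-runs one logged disk probe under nova's resource caps.
from oslo_concurrency import processutils

limits = processutils.ProcessLimits(
    address_space=1073741824,  # matches --as=1073741824 (1 GiB)
    cpu_time=30,               # matches --cpu=30 (seconds)
)
out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info',
    '/var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0',
    '--force-share', '--output=json',
    prlimit=limits,
)
print(out)  # JSON block describing the ephemeral disk
```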
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.379 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.381 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4635MB free_disk=72.33751678466797GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.382 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.382 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.485 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.486 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.486 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 98515579-e916-472d-99ab-5492cfa34aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.486 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.487 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.487 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.588 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.605 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
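For orientation, the inventory in the preceding line determines what the scheduler can place here: placement treats capacity as roughly (total - reserved) * allocation_ratio per resource class. A quick back-of-envelope check with the values copied from the log (the formula is the usual placement rule of thumb, not code lifted from nova):

```python
# Values copied from the "Inventory has not changed" line above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, v in inventory.items():
    capacity = (v['total'] - v['reserved']) * v['allocation_ratio']
    print(f"{rc}: schedulable capacity = {capacity}")
# VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2
```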
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.634 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:35:09 compute-0 nova_compute[189485]: 2025-11-29 15:35:09.636 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.254s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
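The Acquiring/acquired/released triple bracketing the resource-tracker update is oslo.concurrency's standard lock tracing: the whole `_update_available_resource` body ran under the in-process `compute_resources` semaphore (held 0.254 s here). A minimal sketch of the same pattern; the function name mirrors the log, but the body is illustrative:

```python
from oslo_concurrency import lockutils

@lockutils.synchronized('compute_resources')
def _update_available_resource():
    # Critical section: with the semaphore held, the resource tracker
    # can recompute usage without racing concurrent instance claims.
    pass

# Emits the same Acquiring/acquired/released lines when DEBUG logging
# is enabled for oslo_concurrency.lockutils.
_update_available_resource()
```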
Nov 29 15:35:10 compute-0 nova_compute[189485]: 2025-11-29 15:35:10.042 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:10 compute-0 nova_compute[189485]: 2025-11-29 15:35:10.599 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:10 compute-0 nova_compute[189485]: 2025-11-29 15:35:10.600 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:10 compute-0 nova_compute[189485]: 2025-11-29 15:35:10.601 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:35:10 compute-0 nova_compute[189485]: 2025-11-29 15:35:10.602 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
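The skip message comes from nova's soft-delete reaper: `_reclaim_queued_deletes` only purges SOFT_DELETED instances when `reclaim_instance_interval` is positive, and at the default of 0 deletions are immediate, so the task exits early. A simplified, hypothetical form of that guard:

```python
# Hypothetical simplification of the guard behind the log line above.
reclaim_instance_interval = 0  # nova.conf default; deletes are immediate

def _reclaim_queued_deletes():
    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
        return
    # A positive interval would instead reap instances soft-deleted
    # longer ago than the interval.

_reclaim_queued_deletes()
```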
Nov 29 15:35:13 compute-0 nova_compute[189485]: 2025-11-29 15:35:13.089 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:15 compute-0 nova_compute[189485]: 2025-11-29 15:35:15.044 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:18 compute-0 nova_compute[189485]: 2025-11-29 15:35:18.094 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:20 compute-0 nova_compute[189485]: 2025-11-29 15:35:20.046 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:20 compute-0 podman[243722]: 2025-11-29 15:35:20.612976067 +0000 UTC m=+0.069513046 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:35:23 compute-0 nova_compute[189485]: 2025-11-29 15:35:23.098 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:24 compute-0 ovn_controller[97827]: 2025-11-29T15:35:24Z|00049|memory_trim|INFO|Detected inactivity (last active 30023 ms ago): trimming memory
Nov 29 15:35:25 compute-0 nova_compute[189485]: 2025-11-29 15:35:25.048 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:26 compute-0 ovn_controller[97827]: 2025-11-29T15:35:26Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:96:c1:c2 192.168.0.225
Nov 29 15:35:26 compute-0 ovn_controller[97827]: 2025-11-29T15:35:26Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:96:c1:c2 192.168.0.225
Nov 29 15:35:28 compute-0 nova_compute[189485]: 2025-11-29 15:35:28.102 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:29 compute-0 podman[243756]: 2025-11-29 15:35:29.714898463 +0000 UTC m=+0.155379968 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:35:29 compute-0 podman[203677]: time="2025-11-29T15:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:35:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:35:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Nov 29 15:35:30 compute-0 nova_compute[189485]: 2025-11-29 15:35:30.051 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:31 compute-0 openstack_network_exporter[205841]: ERROR   15:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:35:31 compute-0 openstack_network_exporter[205841]: ERROR   15:35:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:35:31 compute-0 openstack_network_exporter[205841]: ERROR   15:35:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:35:31 compute-0 openstack_network_exporter[205841]: ERROR   15:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:35:31 compute-0 openstack_network_exporter[205841]: ERROR   15:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:35:31 compute-0 podman[243776]: 2025-11-29 15:35:31.668428773 +0000 UTC m=+0.113889017 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=edpm, container_name=kepler, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 29 15:35:31 compute-0 podman[243778]: 2025-11-29 15:35:31.675023419 +0000 UTC m=+0.100608969 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:35:31 compute-0 podman[243777]: 2025-11-29 15:35:31.675961755 +0000 UTC m=+0.108784740 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 15:35:31 compute-0 podman[243779]: 2025-11-29 15:35:31.696137395 +0000 UTC m=+0.132696389 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:35:33 compute-0 nova_compute[189485]: 2025-11-29 15:35:33.106 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:33 compute-0 podman[243853]: 2025-11-29 15:35:33.689257396 +0000 UTC m=+0.126200836 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=edpm, name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, distribution-scope=public)
Nov 29 15:35:35 compute-0 nova_compute[189485]: 2025-11-29 15:35:35.055 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:35 compute-0 podman[243874]: 2025-11-29 15:35:35.678134513 +0000 UTC m=+0.122365123 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:35:38 compute-0 nova_compute[189485]: 2025-11-29 15:35:38.108 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:39 compute-0 podman[243894]: 2025-11-29 15:35:39.658193138 +0000 UTC m=+0.091940527 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:35:40 compute-0 nova_compute[189485]: 2025-11-29 15:35:40.058 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:43 compute-0 nova_compute[189485]: 2025-11-29 15:35:43.110 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:45 compute-0 nova_compute[189485]: 2025-11-29 15:35:45.061 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:48 compute-0 nova_compute[189485]: 2025-11-29 15:35:48.113 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:50 compute-0 nova_compute[189485]: 2025-11-29 15:35:50.066 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:51 compute-0 podman[243918]: 2025-11-29 15:35:51.678795773 +0000 UTC m=+0.109508418 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:35:53 compute-0 nova_compute[189485]: 2025-11-29 15:35:53.117 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:55 compute-0 nova_compute[189485]: 2025-11-29 15:35:55.068 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:58 compute-0 nova_compute[189485]: 2025-11-29 15:35:58.121 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:35:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:35:59.167 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:35:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:35:59.167 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:35:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:35:59.169 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:35:59 compute-0 podman[203677]: time="2025-11-29T15:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:35:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:35:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4786 "" "Go-http-client/1.1"
Nov 29 15:36:00 compute-0 nova_compute[189485]: 2025-11-29 15:36:00.070 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:00 compute-0 podman[243941]: 2025-11-29 15:36:00.711813682 +0000 UTC m=+0.149466460 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm)
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.051 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.053 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
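The two DEBUG lines above state the trade-off plainly: the [pollsters] source defines more pollsters than the executor has worker threads, so each polling cycle serializes and can overrun its nominal interval. A toy stdlib illustration of the effect (not ceilometer code):

```python
# 10 pollsters, 1 worker thread: the cycle takes ~10x one pollster's time.
from concurrent.futures import ThreadPoolExecutor
import time

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's work
    return name

pollsters = [f'pollster-{i}' for i in range(10)]
start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as pool:
    list(pool.map(poll, pollsters))
print(f'cycle took {time.monotonic() - start:.1f}s')  # ~1.0s, not ~0.1s
```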
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.054 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.061 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance dd0fdf5e-41d6-4c60-a546-112da1f37416 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.063 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/dd0fdf5e-41d6-4c60-a546-112da1f37416 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}21f1b25129fd7f828fba82e66d37137d0fe6cb4aa99b37755c299ad1aab8f053" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 29 15:36:01 compute-0 openstack_network_exporter[205841]: ERROR   15:36:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:36:01 compute-0 openstack_network_exporter[205841]: ERROR   15:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:36:01 compute-0 openstack_network_exporter[205841]: ERROR   15:36:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:36:01 compute-0 openstack_network_exporter[205841]: ERROR   15:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:36:01 compute-0 openstack_network_exporter[205841]: ERROR   15:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
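The exporter errors above mean openstack_network_exporter found no *.ctl control sockets for ovsdb-server or ovn-northd; on a compute node ovn-northd normally runs only on the control plane, so that part is expected noise. A quick check for the sockets at their conventional default paths (the paths are assumptions and differ in containerized deployments):

    # Look for the OVS/OVN control sockets the exporter could not find.
    import glob

    for pattern in ('/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        matches = glob.glob(pattern)
        print(pattern, '->', matches or 'missing')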
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.958 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Sat, 29 Nov 2025 15:36:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-9e014388-4ae1-426b-a2c7-54695e32eb46 x-openstack-request-id: req-9e014388-4ae1-426b-a2c7-54695e32eb46 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.959 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "dd0fdf5e-41d6-4c60-a546-112da1f37416", "name": "vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me", "status": "ACTIVE", "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "user_id": "5cbf094e2197487fbe16a0fe6e3076ba", "metadata": {"metering.server_group": "cf461906-40b9-4ac3-86c2-0d606dd14d99"}, "hostId": "3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17", "image": {"id": "a4b79580-904f-4527-8cf1-3888cf1ff785", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/a4b79580-904f-4527-8cf1-3888cf1ff785"}]}, "flavor": {"id": "34af94d1-a6e1-4bf0-8957-036dc948fe9d", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/34af94d1-a6e1-4bf0-8957-036dc948fe9d"}]}, "created": "2025-11-29T15:34:43Z", "updated": "2025-11-29T15:34:55Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.225", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:96:c1:c2"}, {"version": 4, "addr": "192.168.122.224", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:96:c1:c2"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/dd0fdf5e-41d6-4c60-a546-112da1f37416"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/dd0fdf5e-41d6-4c60-a546-112da1f37416"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-29T15:34:55.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.959 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/dd0fdf5e-41d6-4c60-a546-112da1f37416 used request id req-9e014388-4ae1-426b-a2c7-54695e32eb46 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
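The REQ/RESP pair above is ceilometer fetching instance metadata from the Nova API via python-novaclient. The same lookup can be reproduced directly; the Keystone URL and credentials below are placeholders, since only the Nova endpoint and server UUID appear in the log:

    # Reproduce the GET /v2.1/servers/<uuid> call with python-novaclient.
    from keystoneauth1 import identity, session
    from novaclient import client

    auth = identity.Password(
        auth_url='https://keystone-internal.openstack.svc:5000/v3',  # assumed
        username='ceilometer', password='***',                       # placeholders
        project_name='service',
        user_domain_name='Default', project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))
    server = nova.servers.get('dd0fdf5e-41d6-4c60-a546-112da1f37416')
    print(server.name, server.status, server.metadata)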
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.960 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416', 'name': 'vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.964 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.968 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '940da983-04c4-46c2-8cd4-96ce0736a67e', 'name': 'vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.972 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98515579-e916-472d-99ab-5492cfa34aea', 'name': 'vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
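The four 'instance data' records above are what discover_libvirt_polling hands to every pollster; samples are keyed off these fields. Pulling the sizing data out of one such record, abbreviated from the first line:

    # The discovery record shape, abbreviated from the log above.
    instance = {
        'id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,
                   'disk': 1, 'ephemeral': 1, 'swap': 0},
        'metadata': {'metering.server_group':
                     'cf461906-40b9-4ac3-86c2-0d606dd14d99'},
    }
    f = instance['flavor']
    print(f"{instance['id']}: {f['vcpus']} vCPU, {f['ram']} MiB RAM, "
          f"{f['disk']}+{f['ephemeral']} GiB disk")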
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.972 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.972 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.974 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:36:01.972898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.978 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for dd0fdf5e-41d6-4c60-a546-112da1f37416 / tap990859f2-5f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.978 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes volume: 2146 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.982 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.986 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes volume: 7196 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.989 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
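network.outgoing.bytes is read from libvirt's per-vNIC counters; the tap device name comes from the 'No delta meter predecessor' line above. A sketch, assuming the agent's libvirt connection is qemu:///system:

    # interfaceStats() returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                           tx_bytes, tx_packets, tx_errs, tx_drop).
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('dd0fdf5e-41d6-4c60-a546-112da1f37416')
    stats = dom.interfaceStats('tap990859f2-5f')
    print('network.outgoing.bytes =', stats[4])   # tx_bytes, 2146 in the log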
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.990 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.990 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.990 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.991 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.991 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes.delta volume: 2498 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.991 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.992 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
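The .delta meters report current-minus-previous readings; the first poll of a resource has no predecessor (hence the earlier 'No delta meter predecessor' debug line) and reports 0. A sketch of that bookkeeping, independent of ceilometer's own inspector cache:

    # Remember the last cumulative reading per (resource, meter) and
    # report the difference; the first poll reports 0.
    _prev = {}

    def delta(resource_id, meter, value):
        key = (resource_id, meter)
        d = value - _prev.get(key, value)
        _prev[key] = value
        return d

    delta('dd0fdf5e', 'network.outgoing.bytes', 2146)   # -> 0, no predecessor
    delta('dd0fdf5e', 'network.outgoing.bytes', 4644)   # -> 2498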
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.992 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.992 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.992 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:36:01.990809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:01.993 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:36:01.992934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.024 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/memory.usage volume: 49.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.050 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.074 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.098 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.099 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
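memory.usage is reported in MB and derived from libvirt's memoryStats(), which returns KiB counters; which balloon statistic gets subtracted ('usable' vs 'unused') depends on what the guest exposes, so the formula below is the usual scheme rather than ceilometer's exact code:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('dd0fdf5e-41d6-4c60-a546-112da1f37416')
    mem = dom.memoryStats()   # KiB counters from the balloon driver
    used_kib = mem['available'] - mem.get('usable', mem.get('unused', 0))
    print('memory.usage =', used_kib / 1024.0, 'MB')   # 49.72265625 in the log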
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.099 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.100 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.100 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:36:02.100797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.101 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.102 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.103 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.103 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes volume: 8322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.104 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.105 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-29T15:36:02.106153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.106 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.107 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.107 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me>]
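The ERROR above is expected rather than a failure: the libvirt inspector has no instantaneous *.rate counters (rates have to be derived downstream from the cumulative meters), so the pollster raises PollsterPermanentError and the manager blacklists the resource instead of retrying it every cycle. The pattern, with class and attribute names mirroring ceilometer's plugin_base but an illustrative loop:

    # A pollster that can never serve a resource raises a permanent error;
    # the manager drops those resources from future polls of that meter.
    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources

    def run_pollster(get_samples, resources, blacklist):
        try:
            return list(get_samples(resources))
        except PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)   # never offered again
            return []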
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.109 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:36:02.109572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.109 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.110 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.111 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.111 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes.delta volume: 3389 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.112 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.113 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.114 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:36:02.114019) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.114 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.115 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.115 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.116 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets volume: 60 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.116 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:36:02.118774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.210 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.211 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.211 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.299 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.300 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.301 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.397 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.398 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.398 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.470 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.471 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.471 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.472 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
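disk.device.read.bytes yields one sample per block device, which is why each instance above produces three volumes (root and ephemeral disks from the flavor, plus the config drive noted in the Nova response). The per-device counters come from libvirt's blockStats(); the device names below are assumptions:

    # blockStats() returns (rd_req, rd_bytes, wr_req, wr_bytes, errs).
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('dd0fdf5e-41d6-4c60-a546-112da1f37416')
    for dev in ('vda', 'vdb', 'hda'):   # assumed device names
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, 'disk.device.read.bytes =', rd_bytes)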
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.473 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.473 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.473 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.474 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:36:02.473905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.474 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.474 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.475 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.475 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.475 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.476 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.476 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.476 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.476 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.476 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:36:02.476722) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 nova_compute[189485]: 2025-11-29 15:36:02.487 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:36:02 compute-0 nova_compute[189485]: 2025-11-29 15:36:02.488 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
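The two nova_compute lines interleaved here are oslo.service's periodic-task machinery kicking off ComputeManager._heal_instance_info_cache. The decorator pattern behind run_periodic_tasks, with an assumed spacing:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)  # assumed
        def _heal_instance_info_cache(self, context):
            print('Starting heal instance info cache')

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)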
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.500 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.500 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.501 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.524 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.524 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.525 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.551 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.552 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.552 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.582 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.582 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.582 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
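disk.device.capacity comes from libvirt's blockInfo(), which returns [capacity, allocation, physical] in bytes; 1073741824 above is the flavor's 1 GiB disk, and the third, smaller value matches the config drive. A sketch:

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('dd0fdf5e-41d6-4c60-a546-112da1f37416')
    capacity, allocation, physical = dom.blockInfo('vda')   # bytes
    print('disk.device.capacity =', capacity)               # 1073741824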
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.583 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.584 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/cpu volume: 31020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:36:02.583977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.584 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 39880000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.584 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/cpu volume: 370010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.584 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/cpu volume: 35090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
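The cpu meter is the guest's cumulative CPU time in nanoseconds (31020000000 ns above is roughly 31 s), which is the last element of libvirt's dom.info():

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('dd0fdf5e-41d6-4c60-a546-112da1f37416')
    state, max_mem_kib, mem_kib, vcpus, cpu_time_ns = dom.info()
    print('cpu =', cpu_time_ns, 'ns')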
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.585 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.585 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.585 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.585 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:36:02.585505) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.585 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 489570269 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.586 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 78552201 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.586 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 63090868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.586 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.586 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.586 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.586 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 490412710 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.587 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 89716861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.587 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 69907902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.587 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 446638356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.587 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 82659007 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.587 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 63931559 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
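disk.device.read.latency is likewise cumulative: nanoseconds spent servicing reads since boot, not a per-operation figure. It maps to the rd_total_times counter exposed by libvirt's blockStatsFlags():

    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByUUIDString('dd0fdf5e-41d6-4c60-a546-112da1f37416')
    stats = dom.blockStatsFlags('vda')
    print('disk.device.read.latency =', stats['rd_total_times'])   # ns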
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.588 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.588 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.588 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.588 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.588 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.588 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me>]
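The ERROR above is the permanent-failure path: LibvirtInspector cannot supply *.rate data, so the pollster raises PollsterPermanentError and the manager stops offering it those resources on later intervals. A simplified sketch of the blacklisting pattern (the class and attribute names follow the ceilometer.polling.plugin_base reference in the log, but this is not the actual implementation):

    class PollsterPermanentError(Exception):
        def __init__(self, resources):
            self.fail_res_list = resources     # resources that can never succeed

    class Manager:
        def __init__(self):
            self.blacklist = []                # resources this pollster must skip

        def poll(self, pollster, resources):
            candidates = [r for r in resources if r not in self.blacklist]
            try:
                return list(pollster.get_samples(candidates))
            except PollsterPermanentError as err:
                # mirrors: "Prevent pollster ... from polling ... anymore!"
                self.blacklist.extend(err.fail_res_list)
                return []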
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.589 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-29T15:36:02.588708) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.589 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.589 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.589 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.590 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:36:02.589612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.590 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.590 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.590 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.591 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.591 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.591 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.591 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.591 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.592 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
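Each cycle above first checks whether the pollster's source requires coordination; with a coordination group name of [None], this agent polls every local instance itself. When a group is configured, a hash ring splits the instances across agents. A toy, self-contained illustration of hash-based partitioning (rendezvous hashing rather than ceilometer's real tooz-backed ring; the agent names are hypothetical):

    import hashlib

    def ring_owner(members, resource_id):
        """Pick the agent owning resource_id: highest hash(member + resource)."""
        def score(member):
            return hashlib.sha256((member + resource_id).encode()).hexdigest()
        return max(members, key=score)

    agents = ["compute-0", "compute-1"]
    for uuid in ["dd0fdf5e-41d6-4c60-a546-112da1f37416",
                 "b5d60fb8-b63e-4b0a-b908-00453be8ce37"]:
        print(uuid, "->", ring_owner(agents, uuid))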
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.592 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.592 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.592 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.592 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.593 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.593 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:36:02.592957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.593 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.593 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.593 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.594 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.594 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.594 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.594 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.594 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.595 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.595 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.595 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.595 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.596 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.596 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.596 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.596 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.596 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.597 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.597 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:36:02.596447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.597 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.597 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.597 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.597 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.598 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.598 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.598 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.598 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.598 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
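disk.device.read.* and disk.device.write.* are cumulative per-device counters that the libvirt inspector reads from the hypervisor. A standalone sketch with the libvirt Python bindings, to be run on the compute host (the connection URI and the device name "vda" are assumptions):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("dd0fdf5e-41d6-4c60-a546-112da1f37416")
    # blockStats returns cumulative (rd_req, rd_bytes, wr_req, wr_bytes, errs)
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print(rd_req, wr_bytes)   # e.g. 840 and 41779200, as in the lines above
    # The read/write *latency* totals come from blockStatsFlags(), which
    # returns a dict of extended counters (time spent on I/O, in nanoseconds).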
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.599 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.599 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.599 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:36:02.599547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.600 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 1406170011 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.600 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 9552907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.600 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.600 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.600 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.600 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.601 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 1597389173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.601 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 9381814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.601 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.601 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 861553512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.601 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 8222101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.602 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.602 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.602 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.602 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.603 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:36:02.602849) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.603 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.603 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
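All four instances report power.state volume 1, i.e. running. A sketch of reading that state directly from libvirt; the numeric scale is libvirt's virDomainState, and equating it with the meter's volume is an assumption of this sketch:

    import libvirt

    STATE_NAMES = {
        libvirt.VIR_DOMAIN_NOSTATE: "nostate",
        libvirt.VIR_DOMAIN_RUNNING: "running",   # == 1, as logged above
        libvirt.VIR_DOMAIN_PAUSED: "paused",
        libvirt.VIR_DOMAIN_SHUTOFF: "shutoff",
    }

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("98515579-e916-472d-99ab-5492cfa34aea")
    state, _reason = dom.state()
    print(state, STATE_NAMES.get(state, "other"))   # 1 running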
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:36:02.604522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.604 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.605 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.605 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.605 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.605 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.605 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 243 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.606 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.606 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.606 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.606 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.606 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.607 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.607 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.607 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.607 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.607 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:36:02.607612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.607 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.611 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.611 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.612 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.612 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.613 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.613 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.613 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.613 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.614 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:36:02.613983) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.614 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.615 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
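Note the interleaved thread-12 lines: after each pollster runs, the agent records a heartbeat timestamp (_update_status), giving operators a per-meter liveness signal. A minimal bookkeeping sketch, assuming a plain in-memory dict rather than ceilometer's actual status store:

    from datetime import datetime, timezone

    heartbeats = {}

    def heartbeat(pollster_name):
        ts = datetime.now(timezone.utc)
        heartbeats[pollster_name] = ts
        return ts

    heartbeat("disk.ephemeral.size")
    stale = [name for name, ts in heartbeats.items()
             if (datetime.now(timezone.utc) - ts).total_seconds() > 600]
    print(stale)   # pollsters silent for more than 10 minutes would show here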
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.615 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.616 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.616 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.616 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.616 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.617 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.617 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.617 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.618 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.618 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:36:02.616448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.619 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.619 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.620 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.620 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.621 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.621 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.621 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.621 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
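disk.device.usage, disk.device.allocation and the earlier disk.device.capacity line up with the three per-device numbers libvirt reports (the pollster class names above suggest the mapping: PerDevicePhysicalPollster fills disk.device.usage, PerDeviceAllocationPollster fills disk.device.allocation; treating that as exact is an assumption of this sketch). A standalone sketch via blockInfo, with a hypothetical device name:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("940da983-04c4-46c2-8cd4-96ce0736a67e")
    # blockInfo returns [capacity, allocation, physical], all in bytes
    capacity, allocation, physical = dom.blockInfo("vda")
    print(allocation, physical)   # e.g. 22224896 and 21364736, as logged above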
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.622 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.622 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.622 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.622 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:36:02.622402) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.622 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.622 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.623 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets volume: 53 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.623 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.623 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
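The network.incoming.packets meter, along with the *.drop and *.error variants polled below, comes from per-vNIC counters on the hypervisor. A standalone libvirt sketch (the tap interface name is hypothetical):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("940da983-04c4-46c2-8cd4-96ce0736a67e")
    # interfaceStats returns an 8-tuple of cumulative counters
    (rx_bytes, rx_packets, rx_errs, rx_drop,
     tx_bytes, tx_packets, tx_errs, tx_drop) = dom.interfaceStats("tap0")
    print(rx_packets, rx_drop, rx_errs)   # e.g. 53, 0, 0 as in the lines above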
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.623 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.623 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.624 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.624 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.624 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.624 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.624 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.624 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.625 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.625 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.625 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.625 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.625 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.625 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.625 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.626 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.626 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.626 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.626 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.626 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.627 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.627 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.627 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
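The burst of "Finished processing pollster [...]" lines that follows marks the polling task draining its pollster list at the end of the interval. A minimal sketch of such a dispatch loop, assuming a simple pollster interface (the real execute_polling_task_processing in ceilometer/polling/manager.py is more involved):

    import logging

    LOG = logging.getLogger("ceilometer.polling.manager")

    def run_interval(pollsters, resources, publish):
        for pollster in pollsters:
            # pollster.poll() is a hypothetical interface for this sketch;
            # it may return [] when resources were blacklisted earlier.
            samples = pollster.poll(resources)
            for sample in samples:
                publish(sample)
            LOG.debug("Finished processing pollster [%s].", pollster.name)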
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.628 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:36:02.624151) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.629 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:36:02.625225) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.630 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:36:02.626757) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:36:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:36:02.630 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
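
Annotation: each "Finished processing pollster [...]" line above marks one meter completing within the same polling task; the task simply walks its configured pollster list in turn. A toy rendition of that loop (meter names taken from the log; the real manager resolves each name to a stevedore extension and handles errors per pollster):

    def run_polling_task(meters, poll):
        for name in meters:
            poll(name)
            print(f"Finished processing pollster [{name}].")

    run_polling_task(
        ["network.outgoing.bytes", "memory.usage", "cpu", "power.state"],
        poll=lambda name: None,  # stand-in for the real per-meter work
    )
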
Nov 29 15:36:02 compute-0 podman[243963]: 2025-11-29 15:36:02.668165377 +0000 UTC m=+0.105983405 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 15:36:02 compute-0 podman[243961]: 2025-11-29 15:36:02.679839049 +0000 UTC m=+0.127618434 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, io.openshift.expose-services=)
Nov 29 15:36:02 compute-0 podman[243962]: 2025-11-29 15:36:02.687151776 +0000 UTC m=+0.119601399 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 15:36:02 compute-0 podman[243969]: 2025-11-29 15:36:02.713225875 +0000 UTC m=+0.142004860 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
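
Annotation: the podman health_status lines above (and the similar ones later in this log) come from the healthcheck commands embedded in each container's config_data, e.g. '/openstack/healthcheck' for ovn_controller. The same check can be exercised by hand; a sketch via subprocess (container name from the log; podman must be reachable on this host):

    # Run one configured healthcheck manually. `podman healthcheck run`
    # executes the container's health test and exits 0 when it passes.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"],
        capture_output=True, text=True,
    )
    print("healthy" if result.returncode == 0 else "unhealthy")
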
Nov 29 15:36:02 compute-0 nova_compute[189485]: 2025-11-29 15:36:02.760 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:36:02 compute-0 nova_compute[189485]: 2025-11-29 15:36:02.760 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:36:02 compute-0 nova_compute[189485]: 2025-11-29 15:36:02.760 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 15:36:03 compute-0 nova_compute[189485]: 2025-11-29 15:36:03.124 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:03 compute-0 nova_compute[189485]: 2025-11-29 15:36:03.768 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updating instance_info_cache with network_info: [{"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:36:03 compute-0 nova_compute[189485]: 2025-11-29 15:36:03.795 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:36:03 compute-0 nova_compute[189485]: 2025-11-29 15:36:03.795 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
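
Annotation: the instance_info_cache payload logged above is plain JSON: one OVS VIF on network "private" with fixed IP 192.168.0.227 and floating IP 192.168.122.177. Extracting the addresses from a structure of that shape (trimmed here to the address-bearing keys):

    # The nesting below mirrors the logged network_info entry.
    vifs = [{
        "id": "05839a7c-53a3-4f4b-b076-68284d149a00",
        "network": {"label": "private", "subnets": [{
            "ips": [{"address": "192.168.0.227",
                     "floating_ips": [{"address": "192.168.122.177"}]}],
        }]},
    }]
    for vif in vifs:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print("fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print("floating:", fip["address"])
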
Nov 29 15:36:03 compute-0 nova_compute[189485]: 2025-11-29 15:36:03.795 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:36:04 compute-0 podman[244040]: 2025-11-29 15:36:04.643009368 +0000 UTC m=+0.092551135 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=edpm, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, maintainer=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Nov 29 15:36:05 compute-0 nova_compute[189485]: 2025-11-29 15:36:05.073 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:05 compute-0 nova_compute[189485]: 2025-11-29 15:36:05.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:36:06 compute-0 podman[244060]: 2025-11-29 15:36:06.693290361 +0000 UTC m=+0.120241506 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.511 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.512 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.513 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
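
Annotation: the acquiring/acquired/released trio above is oslo.concurrency's standard internal-lock pattern around the resource tracker's critical sections. A minimal sketch of the same API (lock name from the log; the body is a placeholder):

    from oslo_concurrency import lockutils

    # Serialize work on shared resource-tracker state under the same
    # lock name the log shows being acquired and released.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # placeholder for pruning stale compute-node cache entries

    clean_compute_node_cache()
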
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.513 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.661 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.748 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.751 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.816 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.819 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.918 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.919 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.976 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:07 compute-0 nova_compute[189485]: 2025-11-29 15:36:07.984 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.044 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.045 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.106 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.107 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.126 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.170 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.171 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.268 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.276 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.370 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.374 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.438 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.440 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.521 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.523 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.586 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.596 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.661 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.663 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.766 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.768 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.831 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.834 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:36:08 compute-0 nova_compute[189485]: 2025-11-29 15:36:08.895 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
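
Annotation: every qemu-img probe above runs under oslo.concurrency's prlimit wrapper, capping the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30). An equivalent call through the library, assuming the same disk path, would look roughly like:

    from oslo_concurrency import processutils

    # Resource caps matching the logged command line.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    out, err = processutils.execute(
        "qemu-img", "info",
        "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk",
        "--force-share", "--output=json",
        env_variables={"LC_ALL": "C", "LANG": "C"},
        prlimit=limits,
    )
    print(out)
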
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.370 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.371 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4600MB free_disk=72.31591796875GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
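
Annotation: in the hypervisor resource view above, vendor_id 1af4 is Red Hat (virtio) and 8086 is Intel, so this guest's device list is entirely QEMU-emulated hardware with no NUMA affinity (numa_node: null throughout). A quick filter over entries of that shape (two entries copied from the logged list):

    pci_devices = [
        {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"},
        {"address": "0000:00:01.1", "vendor_id": "8086", "product_id": "7010"},
    ]
    virtio = [d["address"] for d in pci_devices if d["vendor_id"] == "1af4"]
    print("virtio devices:", virtio)
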
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.372 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.372 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.801 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.802 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.802 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 98515579-e916-472d-99ab-5492cfa34aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.803 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.803 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.804 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:36:09 compute-0 podman[244130]: 2025-11-29 15:36:09.880364646 +0000 UTC m=+0.081233879 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:36:09 compute-0 nova_compute[189485]: 2025-11-29 15:36:09.902 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:36:10 compute-0 nova_compute[189485]: 2025-11-29 15:36:10.075 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:10 compute-0 nova_compute[189485]: 2025-11-29 15:36:10.322 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
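
Annotation: as a sanity check on the inventory above, placement's usable capacity per resource class is (total - reserved) * allocation_ratio, which here works out to 32 VCPU, 7167 MEMORY_MB and roughly 70 DISK_GB of allocatable headroom:

    # Inventory figures copied from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "usable:", usable)  # 32.0, 7167.0, 70.2
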
Nov 29 15:36:10 compute-0 nova_compute[189485]: 2025-11-29 15:36:10.326 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:36:10 compute-0 nova_compute[189485]: 2025-11-29 15:36:10.327 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.955s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:36:11 compute-0 nova_compute[189485]: 2025-11-29 15:36:11.328 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:36:11 compute-0 nova_compute[189485]: 2025-11-29 15:36:11.328 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:36:12 compute-0 nova_compute[189485]: 2025-11-29 15:36:12.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:36:12 compute-0 nova_compute[189485]: 2025-11-29 15:36:12.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
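
Annotation: the "skipping..." message above is nova's guard on the reclaim_instance_interval option; 0 (the default) disables deferred delete, so the periodic task returns immediately. Paraphrased, not nova's literal code:

    # reclaim_instance_interval <= 0 means soft-deleted instances are
    # never reclaimed by this periodic task.
    def reclaim_queued_deletes(reclaim_instance_interval):
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: look up SOFT_DELETED instances older than the
        # interval and delete them for real.

    reclaim_queued_deletes(0)
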
Nov 29 15:36:13 compute-0 nova_compute[189485]: 2025-11-29 15:36:13.130 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:15 compute-0 nova_compute[189485]: 2025-11-29 15:36:15.078 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:18 compute-0 nova_compute[189485]: 2025-11-29 15:36:18.133 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:20 compute-0 nova_compute[189485]: 2025-11-29 15:36:20.080 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:22 compute-0 podman[244155]: 2025-11-29 15:36:22.682787591 +0000 UTC m=+0.120884644 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:36:23 compute-0 nova_compute[189485]: 2025-11-29 15:36:23.136 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:25 compute-0 nova_compute[189485]: 2025-11-29 15:36:25.082 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:28 compute-0 nova_compute[189485]: 2025-11-29 15:36:28.140 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:36:29 compute-0 podman[203677]: time="2025-11-29T15:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:36:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:36:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
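The podman[203677] entries above are the Podman system API service answering podman_exporter's scrape: a libpod "list containers" call followed by a one-shot stats call. Below is a minimal sketch of the same query over the socket the exporter is configured with (CONTAINER_HOST=unix:///run/podman/podman.sock, per its container config logged earlier); the UnixHTTPConnection helper is an illustrative assumption, not a podman API, and the socket is typically root-only.

```python
# Minimal sketch: query the libpod endpoint seen in the log over podman's
# unix socket. UnixHTTPConnection is a hypothetical helper, not podman API.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__("localhost")  # host only feeds the Host header
        self.socket_path = socket_path

    def connect(self):
        # Dial the AF_UNIX socket instead of a TCP host:port pair.
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/podman/podman.sock")  # root-owned socket
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")  # the logged 200 responses carry ~29 kB
```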
Nov 29 15:36:30 compute-0 nova_compute[189485]: 2025-11-29 15:36:30.086 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:31 compute-0 openstack_network_exporter[205841]: ERROR   15:36:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:36:31 compute-0 openstack_network_exporter[205841]: ERROR   15:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:36:31 compute-0 openstack_network_exporter[205841]: ERROR   15:36:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:36:31 compute-0 openstack_network_exporter[205841]: ERROR   15:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:36:31 compute-0 openstack_network_exporter[205841]: ERROR   15:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:36:31 compute-0 podman[244178]: 2025-11-29 15:36:31.695267171 +0000 UTC m=+0.134674904 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute)
Nov 29 15:36:33 compute-0 nova_compute[189485]: 2025-11-29 15:36:33.144 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:33 compute-0 podman[244196]: 2025-11-29 15:36:33.657089272 +0000 UTC m=+0.101536025 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.openshift.tags=base rhel9, container_name=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., vcs-type=git, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, release=1214.1726694543, config_id=edpm, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release-0.7.12=)
Nov 29 15:36:33 compute-0 podman[244198]: 2025-11-29 15:36:33.680188361 +0000 UTC m=+0.107364510 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 15:36:33 compute-0 podman[244197]: 2025-11-29 15:36:33.690399196 +0000 UTC m=+0.118132860 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:36:33 compute-0 podman[244203]: 2025-11-29 15:36:33.715289753 +0000 UTC m=+0.142861743 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0)
Nov 29 15:36:35 compute-0 nova_compute[189485]: 2025-11-29 15:36:35.089 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:35 compute-0 podman[244276]: 2025-11-29 15:36:35.636526405 +0000 UTC m=+0.075978708 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal)
Nov 29 15:36:37 compute-0 podman[244297]: 2025-11-29 15:36:37.673555904 +0000 UTC m=+0.114276236 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Nov 29 15:36:38 compute-0 nova_compute[189485]: 2025-11-29 15:36:38.147 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:40 compute-0 nova_compute[189485]: 2025-11-29 15:36:40.092 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:40 compute-0 podman[244317]: 2025-11-29 15:36:40.670271854 +0000 UTC m=+0.105086489 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:36:43 compute-0 nova_compute[189485]: 2025-11-29 15:36:43.151 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:45 compute-0 nova_compute[189485]: 2025-11-29 15:36:45.095 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:48 compute-0 nova_compute[189485]: 2025-11-29 15:36:48.154 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:50 compute-0 nova_compute[189485]: 2025-11-29 15:36:50.100 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:53 compute-0 nova_compute[189485]: 2025-11-29 15:36:53.159 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:53 compute-0 podman[244340]: 2025-11-29 15:36:53.685932948 +0000 UTC m=+0.121354156 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:36:55 compute-0 nova_compute[189485]: 2025-11-29 15:36:55.102 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:55 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 15:36:58 compute-0 nova_compute[189485]: 2025-11-29 15:36:58.163 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:36:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:36:59.168 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:36:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:36:59.169 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:36:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:36:59.171 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:36:59 compute-0 podman[203677]: time="2025-11-29T15:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:36:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:36:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 29 15:37:00 compute-0 nova_compute[189485]: 2025-11-29 15:37:00.104 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:01 compute-0 openstack_network_exporter[205841]: ERROR   15:37:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:37:01 compute-0 openstack_network_exporter[205841]: ERROR   15:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:37:01 compute-0 openstack_network_exporter[205841]: ERROR   15:37:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:37:01 compute-0 openstack_network_exporter[205841]: ERROR   15:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:37:01 compute-0 openstack_network_exporter[205841]: ERROR   15:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
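These openstack_network_exporter bursts recur on every scrape: on this compute node there is no ovn-northd (it runs on the control plane), and with the kernel datapath in use the dpif-netdev/* appctl calls have no userspace datapath to query. Which control sockets the exporter can see also depends on the /run/openvswitch and /run/ovn volume mounts shown in its container config above. A quick check of what actually exists, sketched under the assumption that the standard rundir layout applies:

```python
# Minimal sketch, assuming default OVS/OVN rundirs; the exporter's actual
# search paths may differ. Daemons publish sockets named <daemon>.<pid>.ctl.
import glob

for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits if hits else "no control socket files found")
```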
Nov 29 15:37:02 compute-0 podman[244364]: 2025-11-29 15:37:02.692118027 +0000 UTC m=+0.128077957 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 15:37:03 compute-0 nova_compute[189485]: 2025-11-29 15:37:03.167 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:03 compute-0 nova_compute[189485]: 2025-11-29 15:37:03.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:03 compute-0 nova_compute[189485]: 2025-11-29 15:37:03.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:37:03 compute-0 nova_compute[189485]: 2025-11-29 15:37:03.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:37:03 compute-0 nova_compute[189485]: 2025-11-29 15:37:03.787 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:37:03 compute-0 nova_compute[189485]: 2025-11-29 15:37:03.788 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:37:03 compute-0 nova_compute[189485]: 2025-11-29 15:37:03.788 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:37:03 compute-0 nova_compute[189485]: 2025-11-29 15:37:03.789 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:37:04 compute-0 podman[244385]: 2025-11-29 15:37:04.673860722 +0000 UTC m=+0.117774030 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 15:37:04 compute-0 podman[244386]: 2025-11-29 15:37:04.676223446 +0000 UTC m=+0.103006974 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Nov 29 15:37:04 compute-0 podman[244384]: 2025-11-29 15:37:04.68007684 +0000 UTC m=+0.118944562 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, container_name=kepler, release=1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 29 15:37:04 compute-0 podman[244387]: 2025-11-29 15:37:04.717489422 +0000 UTC m=+0.142281416 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:37:05 compute-0 nova_compute[189485]: 2025-11-29 15:37:05.108 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:06 compute-0 podman[244465]: 2025-11-29 15:37:06.679397747 +0000 UTC m=+0.116854336 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Nov 29 15:37:06 compute-0 nova_compute[189485]: 2025-11-29 15:37:06.843 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:37:06 compute-0 nova_compute[189485]: 2025-11-29 15:37:06.864 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:37:06 compute-0 nova_compute[189485]: 2025-11-29 15:37:06.864 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:37:06 compute-0 nova_compute[189485]: 2025-11-29 15:37:06.864 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:06 compute-0 nova_compute[189485]: 2025-11-29 15:37:06.864 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:06 compute-0 nova_compute[189485]: 2025-11-29 15:37:06.865 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:06 compute-0 nova_compute[189485]: 2025-11-29 15:37:06.865 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 15:37:06 compute-0 nova_compute[189485]: 2025-11-29 15:37:06.880 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 15:37:08 compute-0 nova_compute[189485]: 2025-11-29 15:37:08.171 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:08 compute-0 nova_compute[189485]: 2025-11-29 15:37:08.500 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:08 compute-0 nova_compute[189485]: 2025-11-29 15:37:08.501 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:08 compute-0 podman[244484]: 2025-11-29 15:37:08.652493721 +0000 UTC m=+0.091880296 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.223 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.266 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Triggering sync for uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.266 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Triggering sync for uuid 940da983-04c4-46c2-8cd4-96ce0736a67e _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.267 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Triggering sync for uuid 98515579-e916-472d-99ab-5492cfa34aea _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.267 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Triggering sync for uuid dd0fdf5e-41d6-4c60-a546-112da1f37416 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.268 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.268 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.269 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.270 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.271 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.271 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "98515579-e916-472d-99ab-5492cfa34aea" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.272 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.272 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.357 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.089s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.360 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.090s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.371 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "98515579-e916-472d-99ab-5492cfa34aea" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.100s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.377 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
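The Acquiring/acquired/"released" triplets above are oslo.concurrency's standard lock tracing: _sync_power_states takes a short per-instance lock named by the instance UUID so concurrent syncs of the same instance serialize, and each "held 0.0xxs" line reports how long the decorated call ran inside the lock. A minimal sketch of that pattern (plain in-process variant; nova wraps the primitive through its own helpers, hence the "_sync.<locals>.query_driver_power_state_and_sync" names in the log):

```python
# Minimal sketch of the per-UUID locking traced above, using the plain
# oslo.concurrency primitive rather than nova's wrapper around it.
from oslo_concurrency import lockutils

def sync_power_state(instance_uuid: str) -> None:
    @lockutils.synchronized(instance_uuid)
    def query_driver_power_state_and_sync():
        # Compare the hypervisor's power state with nova's DB record here.
        pass

    query_driver_power_state_and_sync()
```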
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.519 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.520 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.520 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.521 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.643 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.704 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.706 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.804 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.806 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.866 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.868 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.960 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:09 compute-0 nova_compute[189485]: 2025-11-29 15:37:09.967 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.060 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.062 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.111 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.127 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.129 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.245 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.247 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.324 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.335 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.397 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.398 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.481 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.482 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.539 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.540 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.600 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.606 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.661 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.661 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.719 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.720 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.778 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.780 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:37:10 compute-0 nova_compute[189485]: 2025-11-29 15:37:10.847 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.343 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.344 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4602MB free_disk=72.31591796875GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.345 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.345 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.523 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.524 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.524 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 98515579-e916-472d-99ab-5492cfa34aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.524 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.524 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.525 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.608 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 15:37:11 compute-0 podman[244552]: 2025-11-29 15:37:11.636985492 +0000 UTC m=+0.092486591 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.687 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.687 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.705 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.743 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.868 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.888 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.890 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.890 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.545s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:37:11 compute-0 nova_compute[189485]: 2025-11-29 15:37:11.890 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:13 compute-0 nova_compute[189485]: 2025-11-29 15:37:13.174 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:13 compute-0 nova_compute[189485]: 2025-11-29 15:37:13.897 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:13 compute-0 nova_compute[189485]: 2025-11-29 15:37:13.928 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:13 compute-0 nova_compute[189485]: 2025-11-29 15:37:13.929 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:37:15 compute-0 nova_compute[189485]: 2025-11-29 15:37:15.114 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:18 compute-0 nova_compute[189485]: 2025-11-29 15:37:18.177 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:20 compute-0 nova_compute[189485]: 2025-11-29 15:37:20.117 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:22 compute-0 nova_compute[189485]: 2025-11-29 15:37:22.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:37:22 compute-0 nova_compute[189485]: 2025-11-29 15:37:22.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 15:37:23 compute-0 nova_compute[189485]: 2025-11-29 15:37:23.182 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:24 compute-0 podman[244574]: 2025-11-29 15:37:24.663318683 +0000 UTC m=+0.102942533 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:37:25 compute-0 nova_compute[189485]: 2025-11-29 15:37:25.119 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:28 compute-0 nova_compute[189485]: 2025-11-29 15:37:28.186 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:29 compute-0 podman[203677]: time="2025-11-29T15:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:37:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:37:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Nov 29 15:37:30 compute-0 nova_compute[189485]: 2025-11-29 15:37:30.123 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:31 compute-0 openstack_network_exporter[205841]: ERROR   15:37:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:37:31 compute-0 openstack_network_exporter[205841]: ERROR   15:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:37:31 compute-0 openstack_network_exporter[205841]: ERROR   15:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:37:31 compute-0 openstack_network_exporter[205841]: ERROR   15:37:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:37:31 compute-0 openstack_network_exporter[205841]: ERROR   15:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:37:33 compute-0 nova_compute[189485]: 2025-11-29 15:37:33.189 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:33 compute-0 podman[244596]: 2025-11-29 15:37:33.710182583 +0000 UTC m=+0.143810818 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm)
Nov 29 15:37:35 compute-0 nova_compute[189485]: 2025-11-29 15:37:35.126 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:35 compute-0 podman[244617]: 2025-11-29 15:37:35.664488473 +0000 UTC m=+0.094406333 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 15:37:35 compute-0 podman[244618]: 2025-11-29 15:37:35.681240742 +0000 UTC m=+0.093892059 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 29 15:37:35 compute-0 podman[244616]: 2025-11-29 15:37:35.708319198 +0000 UTC m=+0.144584209 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, architecture=x86_64, release-0.7.12=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., container_name=kepler, vcs-type=git, com.redhat.component=ubi9-container, version=9.4)
Nov 29 15:37:35 compute-0 podman[244620]: 2025-11-29 15:37:35.732375204 +0000 UTC m=+0.143060659 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 15:37:37 compute-0 podman[244691]: 2025-11-29 15:37:37.658533078 +0000 UTC m=+0.105427398 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 15:37:38 compute-0 nova_compute[189485]: 2025-11-29 15:37:38.195 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:39 compute-0 podman[244709]: 2025-11-29 15:37:39.692473034 +0000 UTC m=+0.132883055 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 15:37:40 compute-0 nova_compute[189485]: 2025-11-29 15:37:40.129 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:42 compute-0 podman[244728]: 2025-11-29 15:37:42.683815949 +0000 UTC m=+0.113961897 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:37:43 compute-0 nova_compute[189485]: 2025-11-29 15:37:43.199 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:45 compute-0 nova_compute[189485]: 2025-11-29 15:37:45.131 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:48 compute-0 nova_compute[189485]: 2025-11-29 15:37:48.204 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:50 compute-0 nova_compute[189485]: 2025-11-29 15:37:50.134 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:53 compute-0 nova_compute[189485]: 2025-11-29 15:37:53.207 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:55 compute-0 nova_compute[189485]: 2025-11-29 15:37:55.137 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:37:55 compute-0 podman[244752]: 2025-11-29 15:37:55.661598067 +0000 UTC m=+0.095529523 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:37:58 compute-0 nova_compute[189485]: 2025-11-29 15:37:58.209 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:37:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:37:59.169 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:37:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:37:59.169 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:37:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:37:59.170 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
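The three lockutils lines above are the usual acquire / acquired-after-wait / released-after-held triple around ProcessMonitor._check_child_processes. The same waited/held bookkeeping can be reproduced with a plain threading.Lock (a sketch of the pattern, not the oslo_concurrency implementation):

    import threading
    import time

    lock = threading.Lock()

    def check_child_processes():
        t0 = time.monotonic()
        with lock:                               # "Acquiring lock ..."
            waited = time.monotonic() - t0
            print(f'acquired :: waited {waited:.3f}s')
            t1 = time.monotonic()
            time.sleep(0.001)                    # stand-in for the check body
            held = time.monotonic() - t1
        print(f'released :: held {held:.3f}s')

    check_child_processes()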
Nov 29 15:37:59 compute-0 podman[203677]: time="2025-11-29T15:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:37:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:37:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4780 "" "Go-http-client/1.1"
Nov 29 15:38:00 compute-0 nova_compute[189485]: 2025-11-29 15:38:00.139 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.052 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.053 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
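The two lines above mean this agent has more pollsters than executor threads (a single worker here), so each polling cycle runs them serially. The effect is easy to reproduce with a toy ThreadPoolExecutor (illustration only, not ceilometer code):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)                     # stand-in for one meter's queries
        return name

    t0 = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:   # [1] thread, as logged
        list(pool.map(pollster, [f"meter-{i}" for i in range(5)]))
    # 5 tasks x 0.1s on 1 worker take ~0.5s instead of ~0.1s on 5 workers.
    print(f"cycle took {time.monotonic() - t0:.2f}s")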
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.053 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.053 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.060 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416', 'name': 'vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.064 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.067 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '940da983-04c4-46c2-8cd4-96ce0736a67e', 'name': 'vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.070 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98515579-e916-472d-99ab-5492cfa34aea', 'name': 'vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
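Discovery returned four running instances, three of which carry a metering.server_group key in their metadata (the cf461906... server group); test_0 has none. Grouping samples by that key is a small fold over dicts shaped like the ones logged above (trimmed copies used here for brevity):

    from collections import defaultdict

    # Trimmed-down copies of the discovery dicts logged above.
    instances = [
        {"name": "test_0", "metadata": {}},
        {"name": "vnf-rlelz4fnk4me",
         "metadata": {"metering.server_group": "cf461906-40b9-4ac3-86c2-0d606dd14d99"}},
        {"name": "vnf-k24hqdu6artm",
         "metadata": {"metering.server_group": "cf461906-40b9-4ac3-86c2-0d606dd14d99"}},
    ]

    groups = defaultdict(list)
    for inst in instances:
        key = inst["metadata"].get("metering.server_group", "<ungrouped>")
        groups[key].append(inst["name"])
    print(dict(groups))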
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.071 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.071 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.073 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:38:01.071464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.076 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.082 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.088 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes volume: 7266 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.093 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.095 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.095 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.096 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.096 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:38:01.095635) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.096 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.096 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.097 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
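network.outgoing.bytes is a cumulative per-vNIC counter, while the .delta meter is just the difference between two consecutive readings, which is why the idle 98515579... instance reports 0. The arithmetic for the dd0fdf5e... instance (the previous reading of 2146 is inferred from the 2286/140 pair above, not taken from the log):

    previous, current = 2146, 2286          # consecutive cumulative readings
    delta = current - previous              # -> 140, the .delta sample above
    print(delta)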
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.098 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.098 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.099 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:38:01.098873) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.131 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.165 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.79296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.196 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/memory.usage volume: 49.0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.224 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
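The fractional memory.usage values are MiB figures derived from libvirt's KiB memory statistics, e.g. 50252 KiB / 1024 = 49.07421875. A sketch of the conversion (which libvirt stat is sampled is ceilometer's concern; the arithmetic is the point here):

    def kib_to_mib(kib):
        # libvirt reports domain memory stats in KiB; the memory.usage
        # meter is expressed in MB (MiB).
        return kib / 1024.0

    print(kib_to_mib(50252))                # -> 49.07421875, as sampled above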
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.225 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.226 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.227 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.227 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes volume: 8322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.228 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.229 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.230 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:38:01.226298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.231 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.232 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.232 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.233 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.233 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.235 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.235 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.235 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.236 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.236 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.237 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.237 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets volume: 61 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.237 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:38:01.231527) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.237 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:38:01.236132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.239 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.239 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.240 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.241 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:38:01.240301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.325 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.326 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.326 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.401 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.402 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.403 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 openstack_network_exporter[205841]: ERROR   15:38:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:38:01 compute-0 openstack_network_exporter[205841]: ERROR   15:38:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:38:01 compute-0 openstack_network_exporter[205841]: ERROR   15:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:38:01 compute-0 openstack_network_exporter[205841]: ERROR   15:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
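These exporter errors mean the ovs-appctl-style calls could not locate a daemon: ovsdb-server, ovn-northd, and the PMD commands are all reached through a <daemon>.<pid>.ctl control socket under the OVS run directory, and none was found (ovn-northd would not normally run on a compute node anyway). A quick check for those sockets (/var/run/openvswitch is the usual default and an assumption here):

    import glob

    # ovs-appctl finds daemons via <daemon>.<pid>.ctl sockets in the run dir.
    ctl_files = glob.glob("/var/run/openvswitch/*.ctl")
    if not ctl_files:
        print("no control socket files found")   # matches the errors above
    for path in ctl_files:
        print("control socket:", path)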
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.495 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.495 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.496 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.579 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.580 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.581 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.582 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.582 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.582 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.582 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.582 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.583 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.583 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.583 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:38:01.582500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.584 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.584 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.584 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:38:01.584269) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.610 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.610 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.610 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.637 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.638 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.638 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.664 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.664 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.665 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.690 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.691 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.691 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.692 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
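Each instance reports 1073741824 bytes of capacity for its first two devices, exactly the flavor's 1 GB root and 1 GB ephemeral disks (1 GiB = 1024**3 bytes); the much smaller third device (583680 bytes) is presumably the config drive. The arithmetic:

    flavor_disk_gb = 1                      # m1.small root disk, per the flavor above
    capacity_bytes = flavor_disk_gb * 1024**3
    print(capacity_bytes)                   # -> 1073741824, matching disk.device.capacity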
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.692 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.692 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.692 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.692 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.692 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.692 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/cpu volume: 32700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.693 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 41620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.693 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/cpu volume: 371750000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.693 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/cpu volume: 36830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.693 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:38:01.692701) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
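The cpu volumes just logged are cumulative guest CPU time in nanoseconds (32700000000 ns is about 32.7 s of CPU time for dd0fdf5e-41d6-4c60-a546-112da1f37416), so utilization only falls out of the difference between two polls. A sketch of that derivation; the second reading, the 300-second interval, and the vCPU count of 2 are hypothetical numbers for illustration.

    # CPU utilization (%) from two cumulative cpu-time readings.
    # 32_700_000_000 ns comes from the log; everything else is hypothetical.
    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus):
        used_s = (curr_ns - prev_ns) / 1e9      # guest CPU-seconds in the window
        return 100.0 * used_s / (interval_s * vcpus)

    print(cpu_util_percent(32_700_000_000, 33_300_000_000, 300, 2))  # -> 0.1 (%)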
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 489570269 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 78552201 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.694 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 63090868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.695 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.695 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.695 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.695 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 490412710 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.696 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 89716861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.696 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:38:01.694436) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.696 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.latency volume: 69907902 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.696 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 446638356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.696 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 82659007 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.696 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 63931559 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.697 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.697 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
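network.incoming.bytes.rate is the one meter in this capture that is skipped instead of polled: when discovery yields nothing the pollster has not already handled this cycle, the manager logs the skip and moves on. A sketch of that guard, assuming a simple per-pollster cache of seen resources (ceilometer's real resource caching is more involved than this).

    # Skip a pollster when discovery returns no resources it hasn't seen yet.
    # The per-pollster cache shape here is an assumption for illustration.
    seen = {}  # pollster name -> set of resource ids already polled

    def maybe_poll(name, discovered, poll):
        new = set(discovered) - seen.setdefault(name, set())
        if not new:
            print(f"Skip pollster {name}, no new resources found this cycle")
            return
        seen[name].update(new)
        poll(new)

    maybe_poll("network.incoming.bytes.rate", [], lambda resources: None)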
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.697 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.697 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.697 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.697 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.698 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.698 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.698 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.698 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.698 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.698 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.699 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.699 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.699 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.699 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.699 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.700 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.700 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:38:01.697984) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.700 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
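disk.device.read.latency (above) is cumulative nanoseconds spent in reads and disk.device.read.requests is the matching cumulative read count, both per device, so mean time per read is the ratio of their deltas between two polls. A sketch using the first device of dd0fdf5e-41d6-4c60-a546-112da1f37416 (489570269 ns over 840 reads), treating boot as the implicit previous sample.

    # Mean read latency from the cumulative counters logged above.
    # Using prev = 0 (boot) is an approximation; production code would
    # diff two successive polls instead.
    def mean_read_latency_ms(latency_ns_delta, requests_delta):
        if requests_delta == 0:
            return 0.0
        return latency_ns_delta / requests_delta / 1e6

    print(mean_read_latency_ms(489_570_269, 840))  # ~0.58 ms per read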
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:38:01.701488) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.701 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.702 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.702 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.702 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.702 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.702 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.703 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.703 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.703 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.703 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.703 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
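The three byte-sized disk meters in this capture line up with the three fields of libvirt's block-info query: disk.device.capacity is the virtual disk size, disk.device.allocation (polled further down by PerDeviceAllocationPollster) the allocated offset, and disk.device.usage, which the log shows coming from PerDevicePhysicalPollster, the physical bytes the image occupies on the host. A sketch of reading the same triple directly through the libvirt Python binding; it assumes a reachable local qemu hypervisor, and the domain UUID is simply taken from this log.

    # Read capacity/allocation/physical for each disk of a running domain.
    # Requires the libvirt-python binding and access to the local hypervisor;
    # the UUID below is environment-specific, copied from this log.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("dd0fdf5e-41d6-4c60-a546-112da1f37416")
    for target in ET.fromstring(dom.XMLDesc()).findall("devices/disk/target"):
        dev = target.get("dev")                        # e.g. vda, vdb, hda
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, capacity, allocation, physical)
    conn.close()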
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.704 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.704 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.704 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.704 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.704 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.704 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.705 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.705 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.705 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.705 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.705 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:38:01.704548) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.706 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.706 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.706 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.706 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.707 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.707 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.708 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.708 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.708 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.708 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.708 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 1406170011 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.708 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 9552907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.709 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.709 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.709 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.709 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.709 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 1597389173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.710 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 9381814 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:38:01.708457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.710 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.710 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 861553512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.710 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 8222101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.711 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.711 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.711 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.712 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.712 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.712 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:38:01.711901) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.712 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
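All four instances report power.state volume 1, which matches libvirt's virDomainState numbering (1 = VIR_DOMAIN_RUNNING); the assumption here is that the sample carries the libvirt state value through unchanged, which these readings are consistent with. The full numbering, for decoding such samples:

    # libvirt virDomainState values; power.state appears to report these verbatim.
    POWER_STATE = {
        0: "nostate",
        1: "running",      # the value logged for all four instances above
        2: "blocked",
        3: "paused",
        4: "shutdown",     # in the process of shutting down
        5: "shutoff",
        6: "crashed",
        7: "pmsuspended",
    }

    assert POWER_STATE[1] == "running"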
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.713 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.714 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.714 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.714 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.714 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 243 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.714 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.715 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.715 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.715 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.716 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.716 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:38:01.713379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.717 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.717 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.717 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.717 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.718 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.718 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.718 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.718 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.718 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.719 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:38:01.717299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.720 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:38:01.718817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.720 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.720 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.720 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.721 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.721 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.721 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:38:01.720308) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.721 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.721 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.722 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.722 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.722 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.723 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.723 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.723 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.723 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.724 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.724 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:38:01.724015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.724 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.724 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets volume: 53 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.724 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.725 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.727 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.727 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.727 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.727 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:38:01.725748) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.727 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:38:01.727035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.728 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.728 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.729 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.729 14 DEBUG ceilometer.compute.pollsters [-] 940da983-04c4-46c2-8cd4-96ce0736a67e/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.729 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.730 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:38:01.728864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:38:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:38:01.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
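[editor's note] The _stats_to_sample DEBUG lines above follow a stable "<instance-uuid>/<meter> volume: <value>" shape, so a per-instance usage table can be recovered straight from the journal. A minimal sketch, assuming the log is fed in on stdin exactly as shown; the regex and the tabulate_samples helper are illustrative, not part of ceilometer:

import re
import sys
from collections import defaultdict

# Matches e.g. "... ceilometer.compute.pollsters [-] <uuid>/<meter> volume: <n> _stats_to_sample ..."
SAMPLE_RE = re.compile(
    r"ceilometer\.compute\.pollsters \[-\] "
    r"(?P<instance>[0-9a-f-]{36})/(?P<meter>[\w.]+) "
    r"volume: (?P<volume>\d+) _stats_to_sample"
)

def tabulate_samples(lines):
    """Collect {instance: {meter: volume}} from ceilometer DEBUG lines."""
    table = defaultdict(dict)
    for line in lines:
        m = SAMPLE_RE.search(line)
        if m:
            table[m.group("instance")][m.group("meter")] = int(m.group("volume"))
    return table

if __name__ == "__main__":
    for instance, meters in sorted(tabulate_samples(sys.stdin).items()):
        print(instance, meters)

For the window above this yields, e.g., 940da983-... {'network.incoming.packets': 53, 'network.incoming.packets.drop': 0, 'network.incoming.packets.error': 0}.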
Nov 29 15:38:03 compute-0 nova_compute[189485]: 2025-11-29 15:38:03.213 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:38:04 compute-0 podman[244778]: 2025-11-29 15:38:04.711796407 +0000 UTC m=+0.144719672 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
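[editor's note] The podman health_status events embed each container's full EDPM configuration as a Python-literal config_data={...} dict (single quotes, bare True), so it can be recovered with ast.literal_eval once the span is cut out of the line. A minimal sketch under that assumption; the brace-matching and the function name are illustrative, and it presumes no string value itself contains a brace (true for the lines shown here):

import ast

def extract_config_data(log_line):
    """Pull the config_data={...} literal out of a podman health_status line."""
    start = log_line.index("config_data=") + len("config_data=")
    depth = 0
    for i, ch in enumerate(log_line[start:], start):
        if ch == "{":
            depth += 1
        elif ch == "}":
            depth -= 1
            if depth == 0:
                # Naive brace matching: assumes no '{' or '}' inside string values.
                return ast.literal_eval(log_line[start:i + 1])
    raise ValueError("unbalanced config_data literal")

# e.g., for the ceilometer_agent_compute line above:
#   cfg = extract_config_data(line)
#   cfg["healthcheck"]["test"]  -> '/openstack/healthcheck compute'
#   cfg["volumes"]              -> list of bind mounts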
Nov 29 15:38:05 compute-0 nova_compute[189485]: 2025-11-29 15:38:05.142 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:38:05 compute-0 nova_compute[189485]: 2025-11-29 15:38:05.516 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:38:05 compute-0 nova_compute[189485]: 2025-11-29 15:38:05.516 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
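[editor's note] _heal_instance_info_cache is one of several ComputeManager methods driven by oslo.service's periodic task machinery, which also produces the run of "Running periodic task ..." lines further down. A minimal sketch of that pattern, assuming oslo.service and oslo.config are installed; the manager class and the 60 s spacing are illustrative, not nova's actual configuration:

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class ManagerSketch(periodic_task.PeriodicTasks):
    """Illustrative stand-in for nova's ComputeManager."""

    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # nova refreshes one instance's network info cache per run
        print("healing instance info cache")

# The service loop invokes run_periodic_tasks(context) on an interval;
# each task that is due is then dispatched, producing the DEBUG lines
# logged by oslo_service.periodic_task above.
mgr = ManagerSketch()
mgr.run_periodic_tasks(None)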
Nov 29 15:38:06 compute-0 podman[244800]: 2025-11-29 15:38:06.651769763 +0000 UTC m=+0.076507434 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 15:38:06 compute-0 podman[244799]: 2025-11-29 15:38:06.667868654 +0000 UTC m=+0.092472181 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 15:38:06 compute-0 podman[244798]: 2025-11-29 15:38:06.669172509 +0000 UTC m=+0.097806124 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, container_name=kepler, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9, version=9.4, release-0.7.12=)
Nov 29 15:38:06 compute-0 podman[244801]: 2025-11-29 15:38:06.712958253 +0000 UTC m=+0.120443411 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:38:06 compute-0 nova_compute[189485]: 2025-11-29 15:38:06.814 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:38:06 compute-0 nova_compute[189485]: 2025-11-29 15:38:06.814 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:38:06 compute-0 nova_compute[189485]: 2025-11-29 15:38:06.814 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:38:08 compute-0 nova_compute[189485]: 2025-11-29 15:38:08.216 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:38:08 compute-0 podman[244877]: 2025-11-29 15:38:08.64775064 +0000 UTC m=+0.095207685 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, version=9.6, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., name=ubi9-minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.144 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:38:10 compute-0 podman[244898]: 2025-11-29 15:38:10.647155189 +0000 UTC m=+0.093531560 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.651 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updating instance_info_cache with network_info: [{"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
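[editor's note] The "Updating instance_info_cache with network_info:" payload is plain JSON (a list of VIF dicts), so an instance's addressing can be read back out of the log line directly. A minimal sketch, assuming the JSON list has been isolated from the line as shown above; the helper name is illustrative:

import json

def summarize_network_info(payload):
    """Summarize a nova network_info JSON list: one line per VIF."""
    for vif in json.loads(payload):
        fixed, floating = [], []
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fixed.append(ip["address"])
                floating.extend(f["address"] for f in ip.get("floating_ips", []))
        print(f'{vif["id"]} mac={vif["address"]} dev={vif["devname"]} '
              f'fixed={fixed} floating={floating}')

# For the entry above this reports, roughly:
#   7a530c9e-... mac=fa:16:3e:56:61:08 dev=tap7a530c9e-47
#   fixed=['192.168.0.24'] floating=['192.168.122.226']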
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.666 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.666 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.666 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.667 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.667 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.667 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.668 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.704 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.704 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.704 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.705 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.809 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.888 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.890 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.947 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:10 compute-0 nova_compute[189485]: 2025-11-29 15:38:10.948 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.026 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.027 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.087 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.098 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.156 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.157 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.217 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.218 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.302 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.304 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.366 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.377 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.479 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.480 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.581 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.582 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.643 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.644 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.738 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.745 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.804 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.805 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.867 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.869 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.967 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:38:11 compute-0 nova_compute[189485]: 2025-11-29 15:38:11.968 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.029 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
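[editor's note] Each instance disk above is probed with qemu-img info wrapped in oslo_concurrency.prlimit, which caps the child at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) so a pathological image cannot wedge the resource audit. The exact command from the log can be replayed by hand; a minimal sketch using plain subprocess, with the disk path as a placeholder:

import json
import subprocess

DISK = "/var/lib/nova/instances/<uuid>/disk"  # placeholder path

cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # 1 GiB address-space cap
    "--cpu=30",          # 30 s CPU-time cap
    "--", "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", DISK, "--force-share", "--output=json",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True)
info = json.loads(out.stdout)
print(info["format"], info["virtual-size"])

Inside nova this goes through oslo_concurrency.processutils.execute (the "Running cmd (subprocess)" / "CMD ... returned" pairs above), which is why every probe is logged with its exit code and wall-clock duration.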
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.407 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.409 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4601MB free_disk=72.31597518920898GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.409 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.410 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.555 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.555 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 940da983-04c4-46c2-8cd4-96ce0736a67e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.555 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 98515579-e916-472d-99ab-5492cfa34aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.555 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.556 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.556 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.718 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.740 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
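[annotation] Placement computes schedulable capacity per resource class as (total - reserved) * allocation_ratio, so this node advertises more CPU than the raw hardware. Worked out with the inventory values logged above:

    # Worked example using the inventory reported in the previous line.
    vcpu_capacity = (8 - 0) * 4.0        # 32 schedulable VCPUs
    ram_capacity  = (7679 - 512) * 1.0   # 7167 MB schedulable RAM
    disk_capacity = (79 - 1) * 0.9       # 70.2 GB schedulable disk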
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.742 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:38:12 compute-0 nova_compute[189485]: 2025-11-29 15:38:12.743 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.333s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
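[annotation] The Acquiring/acquired/released triplet around the update is oslo.concurrency's DEBUG lock logging; the resource tracker serializes on a process-local "compute_resources" lock. A minimal sketch of the pattern, with a hypothetical function standing in for the tracker method:

    from oslo_concurrency import lockutils

    # Hedged sketch: the function name is illustrative; the lock name
    # matches the log lines above, and the waited/held durations are
    # what lockutils logs at DEBUG.
    @lockutils.synchronized("compute_resources")
    def update_available_resource():
        ...  # critical section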
Nov 29 15:38:13 compute-0 nova_compute[189485]: 2025-11-29 15:38:13.219 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:13 compute-0 nova_compute[189485]: 2025-11-29 15:38:13.559 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:38:13 compute-0 nova_compute[189485]: 2025-11-29 15:38:13.560 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:38:13 compute-0 podman[244968]: 2025-11-29 15:38:13.711426341 +0000 UTC m=+0.151589648 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:38:15 compute-0 nova_compute[189485]: 2025-11-29 15:38:15.147 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:15 compute-0 nova_compute[189485]: 2025-11-29 15:38:15.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:38:15 compute-0 nova_compute[189485]: 2025-11-29 15:38:15.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
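[annotation] The skip above is controlled by a single option: with reclaim_instance_interval left at its default of 0, deletes are immediate and _reclaim_queued_deletes is a no-op. An illustrative nova.conf stanza, not this host's actual config:

    [DEFAULT]
    # Seconds to hold soft-deleted instances before reclaiming them;
    # 0 (the default) disables soft delete, as logged above.
    reclaim_instance_interval = 0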
Nov 29 15:38:18 compute-0 nova_compute[189485]: 2025-11-29 15:38:18.222 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:20 compute-0 nova_compute[189485]: 2025-11-29 15:38:20.148 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:23 compute-0 nova_compute[189485]: 2025-11-29 15:38:23.226 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:25 compute-0 nova_compute[189485]: 2025-11-29 15:38:25.151 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:26 compute-0 podman[244993]: 2025-11-29 15:38:26.65222909 +0000 UTC m=+0.100354327 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:38:28 compute-0 nova_compute[189485]: 2025-11-29 15:38:28.229 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:29 compute-0 podman[203677]: time="2025-11-29T15:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:38:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:38:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4786 "" "Go-http-client/1.1"
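[annotation] The two GETs above are the podman exporter polling the libpod REST API over the rootful podman socket. A self-contained sketch of the same query from Python, using the socket path shown in the exporter's config earlier in this log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Minimal HTTP-over-unix-socket client for the libpod API."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    print(len(json.loads(conn.getresponse().read())), "containers")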
Nov 29 15:38:30 compute-0 nova_compute[189485]: 2025-11-29 15:38:30.154 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:31 compute-0 openstack_network_exporter[205841]: ERROR   15:38:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:38:31 compute-0 openstack_network_exporter[205841]: ERROR   15:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:38:31 compute-0 openstack_network_exporter[205841]: ERROR   15:38:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:38:31 compute-0 openstack_network_exporter[205841]: ERROR   15:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:38:31 compute-0 openstack_network_exporter[205841]: ERROR   15:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
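[annotation] These exporter errors are expected on this node: dpif-netdev/pmd-perf-show and pmd-rxq-show only apply to the userspace (netdev/DPDK) datapath, which this host does not run, and ovn-northd only runs on control-plane nodes, so there is no local control socket to probe. A quick check of which datapaths actually exist, sketched in Python around the standard ovs-appctl command:

    import subprocess

    # 'dpif/show' lists live datapaths; on a kernel-datapath host it
    # shows system@ovs-system and no PMD threads.
    result = subprocess.run(["ovs-appctl", "dpif/show"],
                            capture_output=True, text=True, check=False)
    print(result.stdout or result.stderr)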
Nov 29 15:38:33 compute-0 nova_compute[189485]: 2025-11-29 15:38:33.232 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.158 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.554 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.555 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.556 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.557 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.557 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.559 189489 INFO nova.compute.manager [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Terminating instance#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.561 189489 DEBUG nova.compute.manager [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
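[annotation] Everything from here to the "Took 0.43 seconds to destroy" line below is the compute-side fallout of one API delete: libvirt destroys the domain, the tap device unregisters, OVN releases the port binding, and systemd reaps the machine scope. The client-side trigger is a single call; a hedged sketch with openstacksdk, where the cloud name is illustrative:

    import openstack

    # 'mycloud' is a hypothetical clouds.yaml entry; the UUID is the
    # instance being terminated in the lines that follow.
    conn = openstack.connect(cloud="mycloud")
    conn.compute.delete_server("940da983-04c4-46c2-8cd4-96ce0736a67e")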
Nov 29 15:38:35 compute-0 kernel: tap7a530c9e-47 (unregistering): left promiscuous mode
Nov 29 15:38:35 compute-0 NetworkManager[56360]: <info>  [1764430715.6174] device (tap7a530c9e-47): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.630 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 ovn_controller[97827]: 2025-11-29T15:38:35Z|00050|binding|INFO|Releasing lport 7a530c9e-4765-4cce-b971-8ebbcff0880f from this chassis (sb_readonly=0)
Nov 29 15:38:35 compute-0 ovn_controller[97827]: 2025-11-29T15:38:35Z|00051|binding|INFO|Setting lport 7a530c9e-4765-4cce-b971-8ebbcff0880f down in Southbound
Nov 29 15:38:35 compute-0 ovn_controller[97827]: 2025-11-29T15:38:35Z|00052|binding|INFO|Removing iface tap7a530c9e-47 ovn-installed in OVS
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.637 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.659 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.656 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:56:61:08 192.168.0.24'], port_security=['fa:16:3e:56:61:08 192.168.0.24'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nju3ymh64jso-rpmxigkbvqy5-bmxqrfirgt4s-port-xtgikmozjmyk', 'neutron:cidrs': '192.168.0.24/24', 'neutron:device_id': '940da983-04c4-46c2-8cd4-96ce0736a67e', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa63adc8-00c5-408f-a9a0-653db4d11058', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nju3ymh64jso-rpmxigkbvqy5-bmxqrfirgt4s-port-xtgikmozjmyk', 'neutron:project_id': '04d676205d9142d19f3d4ce7389f72a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab1ce576-0f3a-4a3e-abf1-69502fd41864', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.226', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566ecd39-faeb-413e-8894-df94f2ba695a, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=7a530c9e-4765-4cce-b971-8ebbcff0880f) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.659 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 7a530c9e-4765-4cce-b971-8ebbcff0880f in datapath fa63adc8-00c5-408f-a9a0-653db4d11058 unbound from our chassis#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.663 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa63adc8-00c5-408f-a9a0-653db4d11058#033[00m
Nov 29 15:38:35 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Nov 29 15:38:35 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 36.467s CPU time.
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.683 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[3e44244b-aa60-4923-9d9f-c972cc286c48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:38:35 compute-0 systemd-machined[155802]: Machine qemu-2-instance-00000002 terminated.
Nov 29 15:38:35 compute-0 podman[245017]: 2025-11-29 15:38:35.693175179 +0000 UTC m=+0.132996894 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4)
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.729 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[e8887c29-71e7-4b6e-b0ac-dfd2559c01ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.735 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[d003fe37-1bfb-44b2-b0e1-41c8d6ed3c52]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.775 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[406f418e-0d24-49af-8d86-15e6137798fa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.788 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.795 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.800 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[2d96ef3c-030e-471b-be62-23a7be67a6c2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa63adc8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:9e:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 11, 'rx_bytes': 532, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373724, 'reachable_time': 43046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245050, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.817 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[c106f17a-5846-4406-a763-4a67418bd82d]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373741, 'tstamp': 373741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245060, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373746, 'tstamp': 373746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245060, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.819 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa63adc8-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.821 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.828 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.828 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa63adc8-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.828 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.829 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa63adc8-00, col_values=(('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:38:35 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:35.829 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.852 189489 INFO nova.virt.libvirt.driver [-] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Instance destroyed successfully.#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.853 189489 DEBUG nova.objects.instance [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'resources' on Instance uuid 940da983-04c4-46c2-8cd4-96ce0736a67e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.882 189489 DEBUG nova.virt.libvirt.vif [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:27:28Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-rpmxigkbvqy5-bmxqrfirgt4s-vnf-k24hqdu6artm',id=2,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:27:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-1c17o8s3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:27:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0zMDMzMzkzNDE3NjY1ODM4ODQzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTMwMzMzOTM0MTc2NjU4Mzg4NDM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MzAzMzM5MzQxNzY2NTgzODg0Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTMwMzMzOTM0MTc2NjU4Mzg4NDM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0zMDMzMzkzNDE3NjY1ODM4ODQzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0zMDMzMzkzNDE3NjY1ODM4ODQzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 29 15:38:35 compute-0 nova_compute[189485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MzAzMzM5MzQxNzY2NTgzODg0Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTMwMzMzOTM0MTc2NjU4Mzg4NDM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0zMDMzMzkzNDE3NjY1ODM4ODQzPT0tLQo=',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=940da983-04c4-46c2-8cd4-96ce0736a67e,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.882 189489 DEBUG nova.network.os_vif_util [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "address": "fa:16:3e:56:61:08", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.24", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7a530c9e-47", "ovs_interfaceid": "7a530c9e-4765-4cce-b971-8ebbcff0880f", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.883 189489 DEBUG nova.network.os_vif_util [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:56:61:08,bridge_name='br-int',has_traffic_filtering=True,id=7a530c9e-4765-4cce-b971-8ebbcff0880f,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7a530c9e-47') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.883 189489 DEBUG os_vif [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:61:08,bridge_name='br-int',has_traffic_filtering=True,id=7a530c9e-4765-4cce-b971-8ebbcff0880f,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7a530c9e-47') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.885 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.885 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7a530c9e-47, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.887 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.889 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.891 189489 INFO os_vif [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:56:61:08,bridge_name='br-int',has_traffic_filtering=True,id=7a530c9e-4765-4cce-b971-8ebbcff0880f,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7a530c9e-47')#033[00m
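[annotation] The unplug above is os-vif issuing a DelPortCommand through ovsdbapp against the local OVS database. The equivalent standalone call, sketched with ovsdbapp's public API; the ovsdb-server socket path is assumed to be the distro default:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Port and bridge names are the ones from the DelPortCommand above.
    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.del_port("tap7a530c9e-47", bridge="br-int",
                 if_exists=True).execute(check_error=True)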
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.891 189489 INFO nova.virt.libvirt.driver [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Deleting instance files /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e_del#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.892 189489 INFO nova.virt.libvirt.driver [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Deletion of /var/lib/nova/instances/940da983-04c4-46c2-8cd4-96ce0736a67e_del complete#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.984 189489 DEBUG nova.virt.libvirt.host [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.985 189489 INFO nova.virt.libvirt.host [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] UEFI support detected#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.988 189489 INFO nova.compute.manager [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Took 0.43 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.989 189489 DEBUG oslo.service.loopingcall [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.990 189489 DEBUG nova.compute.manager [-] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:38:35 compute-0 nova_compute[189485]: 2025-11-29 15:38:35.990 189489 DEBUG nova.network.neutron [-] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:38:36 compute-0 nova_compute[189485]: 2025-11-29 15:38:36.027 189489 DEBUG nova.compute.manager [req-fbb203a5-10f8-44a9-a820-9903a18f012e req-48972874-60d5-4473-8306-dce8b8257b98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received event network-vif-unplugged-7a530c9e-4765-4cce-b971-8ebbcff0880f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:38:36 compute-0 nova_compute[189485]: 2025-11-29 15:38:36.027 189489 DEBUG oslo_concurrency.lockutils [req-fbb203a5-10f8-44a9-a820-9903a18f012e req-48972874-60d5-4473-8306-dce8b8257b98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:38:36 compute-0 nova_compute[189485]: 2025-11-29 15:38:36.028 189489 DEBUG oslo_concurrency.lockutils [req-fbb203a5-10f8-44a9-a820-9903a18f012e req-48972874-60d5-4473-8306-dce8b8257b98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:38:36 compute-0 nova_compute[189485]: 2025-11-29 15:38:36.028 189489 DEBUG oslo_concurrency.lockutils [req-fbb203a5-10f8-44a9-a820-9903a18f012e req-48972874-60d5-4473-8306-dce8b8257b98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:38:36 compute-0 nova_compute[189485]: 2025-11-29 15:38:36.029 189489 DEBUG nova.compute.manager [req-fbb203a5-10f8-44a9-a820-9903a18f012e req-48972874-60d5-4473-8306-dce8b8257b98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] No waiting events found dispatching network-vif-unplugged-7a530c9e-4765-4cce-b971-8ebbcff0880f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:38:36 compute-0 nova_compute[189485]: 2025-11-29 15:38:36.029 189489 DEBUG nova.compute.manager [req-fbb203a5-10f8-44a9-a820-9903a18f012e req-48972874-60d5-4473-8306-dce8b8257b98 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received event network-vif-unplugged-7a530c9e-4765-4cce-b971-8ebbcff0880f for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:38:36 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:36.060 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:38:36 compute-0 nova_compute[189485]: 2025-11-29 15:38:36.061 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:36 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:36.061 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 15:38:36 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:38:35.882 189489 DEBUG nova.virt.libvirt.vif [None req-6d59f1be-e0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
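[annotation] rsyslog truncated the 8192-byte VIF debug message because its default limit is 8096 bytes, which is consistent with the base64 user_data payload above arriving split across entries. Raising the global limit avoids the split; an illustrative stanza, not this host's actual config:

    # /etc/rsyslog.conf: must precede any module/input loading
    global(maxMessageSize="64k")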
Nov 29 15:38:37 compute-0 podman[245072]: 2025-11-29 15:38:37.642602175 +0000 UTC m=+0.083193187 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 15:38:37 compute-0 nova_compute[189485]: 2025-11-29 15:38:37.650 189489 DEBUG nova.network.neutron [-] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:38:37 compute-0 podman[245071]: 2025-11-29 15:38:37.664562734 +0000 UTC m=+0.107995562 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, release=1214.1726694543, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, vcs-type=git, managed_by=edpm_ansible)
Nov 29 15:38:37 compute-0 podman[245073]: 2025-11-29 15:38:37.66998484 +0000 UTC m=+0.101943440 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 15:38:37 compute-0 nova_compute[189485]: 2025-11-29 15:38:37.671 189489 INFO nova.compute.manager [-] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Took 1.68 seconds to deallocate network for instance.#033[00m
Nov 29 15:38:37 compute-0 podman[245079]: 2025-11-29 15:38:37.704187989 +0000 UTC m=+0.133014945 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 15:38:37 compute-0 nova_compute[189485]: 2025-11-29 15:38:37.728 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:38:37 compute-0 nova_compute[189485]: 2025-11-29 15:38:37.728 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:38:37 compute-0 nova_compute[189485]: 2025-11-29 15:38:37.858 189489 DEBUG nova.compute.provider_tree [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:38:37 compute-0 nova_compute[189485]: 2025-11-29 15:38:37.878 189489 DEBUG nova.scheduler.client.report [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
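The inventory dict above is what the resource tracker reports to placement; the schedulable capacity per resource class is (total - reserved) * allocation_ratio. A quick check of those numbers, as plain arithmetic on the values from the log line:

    # Capacity placement schedules against: (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2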
Nov 29 15:38:37 compute-0 nova_compute[189485]: 2025-11-29 15:38:37.913 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:38:37 compute-0 nova_compute[189485]: 2025-11-29 15:38:37.936 189489 INFO nova.scheduler.client.report [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Deleted allocations for instance 940da983-04c4-46c2-8cd4-96ce0736a67e#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.024 189489 DEBUG oslo_concurrency.lockutils [None req-6d59f1be-e0ae-48d2-90fb-48186b783814 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.469s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.158 189489 DEBUG nova.compute.manager [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received event network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.159 189489 DEBUG oslo_concurrency.lockutils [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.159 189489 DEBUG oslo_concurrency.lockutils [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.159 189489 DEBUG oslo_concurrency.lockutils [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "940da983-04c4-46c2-8cd4-96ce0736a67e-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.159 189489 DEBUG nova.compute.manager [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] No waiting events found dispatching network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.159 189489 WARNING nova.compute.manager [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received unexpected event network-vif-plugged-7a530c9e-4765-4cce-b971-8ebbcff0880f for instance with vm_state deleted and task_state None.#033[00m
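This WARNING is the tail end of nova's external-event handshake: normally the compute manager registers a waiter before an operation, and pop_instance_event() hands the incoming event to it; here the instance was just deleted, so no waiter exists and the late network-vif-plugged event is logged and dropped. A schematic of that registry pattern (deliberately simplified, not nova's actual classes):

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        # Called before starting an operation that expects the event.
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev

    def pop_instance_event(instance_uuid, event_name):
        # Called from the external-event RPC path (neutron -> nova).
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            # Nobody is waiting: the "Received unexpected event" branch.
            return False
        ev.set()
        return True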
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.160 189489 DEBUG nova.compute.manager [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Received event network-changed-7a530c9e-4765-4cce-b971-8ebbcff0880f external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.160 189489 DEBUG nova.compute.manager [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Refreshing instance network info cache due to event network-changed-7a530c9e-4765-4cce-b971-8ebbcff0880f. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.160 189489 DEBUG oslo_concurrency.lockutils [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.160 189489 DEBUG oslo_concurrency.lockutils [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.161 189489 DEBUG nova.network.neutron [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Refreshing network info cache for port 7a530c9e-4765-4cce-b971-8ebbcff0880f _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.282 189489 DEBUG nova.network.neutron [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.919 189489 DEBUG nova.network.neutron [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Instance is deleted, no further info cache update update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:106#033[00m
Nov 29 15:38:38 compute-0 nova_compute[189485]: 2025-11-29 15:38:38.919 189489 DEBUG oslo_concurrency.lockutils [req-1cb3f2d9-8fa2-4a14-af49-1041181c4196 req-034bf207-1c6f-4f0f-b67c-63b9d275519c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:38:39 compute-0 podman[245148]: 2025-11-29 15:38:39.668912146 +0000 UTC m=+0.116947644 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, release=1755695350)
Nov 29 15:38:40 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:40.064 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
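That transaction is ovsdbapp building a DbSetCommand: the metadata agent acknowledges nb_cfg=7 by stamping it into Chassis_Private.external_ids (the delayed chassis update announced at 15:38:36). Roughly the same call through the public ovsdbapp API, assuming sb_idl is a connected southbound backend object (the name is illustrative):

    # Equivalent of the DbSetCommand in the log; the agent's own wrapper
    # additionally passes if_exists=True so a missing record is not an error.
    sb_idl.db_set(
        'Chassis_Private', '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),
    ).execute(check_error=True)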
Nov 29 15:38:40 compute-0 nova_compute[189485]: 2025-11-29 15:38:40.160 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:40 compute-0 nova_compute[189485]: 2025-11-29 15:38:40.887 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:41 compute-0 podman[245169]: 2025-11-29 15:38:41.66741043 +0000 UTC m=+0.096984917 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd)
Nov 29 15:38:44 compute-0 podman[245189]: 2025-11-29 15:38:44.631177978 +0000 UTC m=+0.079183559 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:38:45 compute-0 nova_compute[189485]: 2025-11-29 15:38:45.163 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:45 compute-0 nova_compute[189485]: 2025-11-29 15:38:45.890 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:50 compute-0 nova_compute[189485]: 2025-11-29 15:38:50.165 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:50 compute-0 nova_compute[189485]: 2025-11-29 15:38:50.850 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764430715.8489149, 940da983-04c4-46c2-8cd4-96ce0736a67e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:38:50 compute-0 nova_compute[189485]: 2025-11-29 15:38:50.851 189489 INFO nova.compute.manager [-] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] VM Stopped (Lifecycle Event)#033[00m
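The "VM Stopped (Lifecycle Event)" line is nova translating a libvirt domain lifecycle callback into its own LifecycleEvent. The underlying libvirt-python registration looks roughly like this (read-only sketch; assumes a running libvirtd and the libvirt-python bindings):

    import libvirt

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # For the log line above, event == libvirt.VIR_DOMAIN_EVENT_STOPPED.
        print(dom.UUIDString(), event, detail)

    libvirt.virEventRegisterDefaultImpl()          # must precede the connection
    conn = libvirt.openReadOnly('qemu:///system')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)
    while True:
        libvirt.virEventRunDefaultImpl()           # dispatches pending callbacks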
Nov 29 15:38:50 compute-0 nova_compute[189485]: 2025-11-29 15:38:50.887 189489 DEBUG nova.compute.manager [None req-0a9280be-97d6-4234-929b-aa07652fff39 - - - - - -] [instance: 940da983-04c4-46c2-8cd4-96ce0736a67e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:38:50 compute-0 nova_compute[189485]: 2025-11-29 15:38:50.893 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:55 compute-0 nova_compute[189485]: 2025-11-29 15:38:55.167 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:55 compute-0 nova_compute[189485]: 2025-11-29 15:38:55.894 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:38:57 compute-0 podman[245214]: 2025-11-29 15:38:57.65038281 +0000 UTC m=+0.088638862 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:38:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:59.171 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:38:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:59.171 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:38:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:38:59.172 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
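These three lines are oslo.concurrency's lockutils instrumentation: every named lock logs how long the caller waited to acquire it and how long it was held, which is what all the "compute_resources" and "refresh_cache-*" lines in this log are. The two usual forms, sketched with names taken from the surrounding lines:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('_check_child_processes')
    def check_child_processes():
        # Runs with the named lock held; lockutils emits the acquired/waited
        # and released/held DEBUG lines seen above.
        pass

    # Context-manager form, as used for the refresh_cache-<uuid> locks:
    with lockutils.lock('refresh_cache-940da983-04c4-46c2-8cd4-96ce0736a67e'):
        pass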
Nov 29 15:38:59 compute-0 podman[203677]: time="2025-11-29T15:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:38:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:38:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4777 "" "Go-http-client/1.1"
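Those two GETs are podman's libpod REST API being polled over the unix socket (the podman_exporter container above sets CONTAINER_HOST=unix:///run/podman/podman.sock). The same containers/json query from Python, using only the standard library:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""

        def __init__(self, path):
            super().__init__('localhost')  # host is unused but required
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(json.load(resp)), 'containers')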
Nov 29 15:39:00 compute-0 nova_compute[189485]: 2025-11-29 15:39:00.170 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:00 compute-0 nova_compute[189485]: 2025-11-29 15:39:00.897 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:01 compute-0 openstack_network_exporter[205841]: ERROR   15:39:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:39:01 compute-0 openstack_network_exporter[205841]: ERROR   15:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:39:01 compute-0 openstack_network_exporter[205841]: ERROR   15:39:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:39:01 compute-0 openstack_network_exporter[205841]: ERROR   15:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:39:01 compute-0 openstack_network_exporter[205841]: ERROR   15:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
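These exporter ERRORs are most likely benign on a compute node: ovn-northd only runs on the control plane, so no ovn-northd control socket can exist here, and the dpif-netdev/* commands apply to the userspace (netdev) datapath while this host runs the kernel datapath. A quick probe for the vswitchd control socket the exporter expects, via subprocess (paths are the conventional ones):

    import glob
    import subprocess

    socks = glob.glob('/run/openvswitch/ovs-vswitchd.*.ctl')
    if socks:
        # Same mechanism the exporter uses: appctl against the unix socket.
        subprocess.run(['ovs-appctl', '-t', socks[0], 'version'], check=False)
    else:
        print('no ovs-vswitchd control socket found')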
Nov 29 15:39:05 compute-0 nova_compute[189485]: 2025-11-29 15:39:05.172 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:05 compute-0 nova_compute[189485]: 2025-11-29 15:39:05.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:05 compute-0 nova_compute[189485]: 2025-11-29 15:39:05.900 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:06 compute-0 nova_compute[189485]: 2025-11-29 15:39:06.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:06 compute-0 nova_compute[189485]: 2025-11-29 15:39:06.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:39:06 compute-0 podman[245238]: 2025-11-29 15:39:06.695329464 +0000 UTC m=+0.141032060 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Nov 29 15:39:06 compute-0 nova_compute[189485]: 2025-11-29 15:39:06.843 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:39:06 compute-0 nova_compute[189485]: 2025-11-29 15:39:06.843 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:39:06 compute-0 nova_compute[189485]: 2025-11-29 15:39:06.844 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 15:39:08 compute-0 nova_compute[189485]: 2025-11-29 15:39:08.413 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updating instance_info_cache with network_info: [{"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
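The network_info blob in that line is plain JSON, which makes triage scriptable; for example, pulling each port's fixed and floating addresses out of a pasted copy (raw_network_info here stands in for the [...] payload from the log line):

    import json

    nw_info = json.loads(raw_network_info)
    for vif in nw_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                print(vif['id'], ip['address'],
                      [fip['address'] for fip in ip.get('floating_ips', [])])
    # -> 05839a7c-... 192.168.0.227 ['192.168.122.177']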
Nov 29 15:39:08 compute-0 nova_compute[189485]: 2025-11-29 15:39:08.440 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:39:08 compute-0 nova_compute[189485]: 2025-11-29 15:39:08.440 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 15:39:08 compute-0 nova_compute[189485]: 2025-11-29 15:39:08.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:08 compute-0 podman[245260]: 2025-11-29 15:39:08.671974441 +0000 UTC m=+0.087482051 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 15:39:08 compute-0 podman[245261]: 2025-11-29 15:39:08.679760891 +0000 UTC m=+0.089121696 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125)
Nov 29 15:39:08 compute-0 podman[245259]: 2025-11-29 15:39:08.684853327 +0000 UTC m=+0.119010338 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, release=1214.1726694543, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, version=9.4, container_name=kepler, vendor=Red Hat, Inc., architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Nov 29 15:39:08 compute-0 podman[245267]: 2025-11-29 15:39:08.722407656 +0000 UTC m=+0.135937223 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.478 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.541 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.542 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.542 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.543 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.640 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.736 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
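Each qemu-img info call above is wrapped by oslo.concurrency's prlimit shim: --as=1073741824 caps the child's address space at 1 GiB and --cpu=30 caps its CPU seconds, so a qemu-img hang or blow-up on a corrupt image cannot take the compute service with it. The equivalent call from Python with the same limits (the instance path is a placeholder):

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', '/var/lib/nova/instances/<uuid>/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1024 ** 3,
                                           cpu_time=30))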
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.737 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.839 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.840 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:09 compute-0 podman[245343]: 2025-11-29 15:39:09.89978728 +0000 UTC m=+0.122196254 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.buildah.version=1.33.7, name=ubi9-minimal, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.914 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.915 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.982 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:09 compute-0 nova_compute[189485]: 2025-11-29 15:39:09.988 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.055 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.057 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.125 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.127 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.175 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.213 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.214 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.278 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.285 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.342 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.343 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.396 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.397 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.451 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.452 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.506 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
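The qemu-img runs above are the resource tracker measuring per-instance disk usage: every root and ephemeral image is probed once per periodic audit, wrapped in oslo.concurrency's prlimit helper so a misbehaving qemu-img cannot exceed 1 GiB of address space (--as=1073741824) or 30 s of CPU time (--cpu=30). A minimal sketch of the same call, assuming oslo.concurrency is installed (names mirror the logged command line, not nova's internals):

    from oslo_concurrency import processutils

    # Limits copied from the logged flags: --as=1073741824, --cpu=30.
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        cpu_time=30,
        address_space=1073741824)

    def disk_info(path):
        # --force-share lets qemu-img read an image a running QEMU holds open.
        out, _err = processutils.execute(
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
            prlimit=QEMU_IMG_LIMITS)
        return out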
Nov 29 15:39:10 compute-0 ovn_controller[97827]: 2025-11-29T15:39:10Z|00053|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
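ovn-controller trims its heap after 30 s without activity (the logged 30002 ms). "Trimming memory" at the libc level is malloc_trim(3), which hands free heap pages back to the kernel; a one-line equivalent via ctypes, assuming Linux/glibc (illustrative only, not OVN's actual C code):

    import ctypes

    libc = ctypes.CDLL('libc.so.6')
    released = libc.malloc_trim(0)  # returns 1 if pages were released, else 0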
Nov 29 15:39:10 compute-0 nova_compute[189485]: 2025-11-29 15:39:10.905 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.018 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.019 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4775MB free_disk=72.33848190307617GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
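The pci_devices field in that resource view is plain JSON, so it can be inspected directly; for example, filtering for the virtio functions (vendor_id 1af4) that back this guest's disks and NICs (input abbreviated to the first device from the log):

    import json

    pci_json = '''[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0",
      "product_id": "1000", "vendor_id": "1af4", "numa_node": null,
      "label": "label_1af4_1000", "dev_type": "type-PCI"}]'''
    virtio = [d['address'] for d in json.loads(pci_json)
              if d['vendor_id'] == '1af4']  # ['0000:00:07.0']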
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.019 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.020 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.139 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.140 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 98515579-e916-472d-99ab-5492cfa34aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.140 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.140 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.140 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
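The final view follows directly from the three allocations logged just above, each {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}: used_disk = 3 × 2 = 6 GB, used_vcpus = 3, and used_ram = 3 × 512 MB plus the 512 MB host reservation (visible in the MEMORY_MB inventory two lines below) = 2048 MB. As a quick check:

    allocs = [{'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}] * 3  # 3 instances
    used_disk  = sum(a['DISK_GB']   for a in allocs)        # 6
    used_vcpus = sum(a['VCPU']      for a in allocs)        # 3
    used_ram   = sum(a['MEMORY_MB'] for a in allocs) + 512  # 2048, incl. reserve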
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.244 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.261 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
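Placement turns that inventory into schedulable capacity with the standard formula capacity = (total - reserved) × allocation_ratio, so this host offers (8 - 0) × 4.0 = 32 VCPU, (7679 - 512) × 1.0 = 7167 MB of RAM, and (79 - 1) × 0.9 = 70.2 GB of disk:

    inventory = {  # copied from the log line above
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2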
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.411 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:39:11 compute-0 nova_compute[189485]: 2025-11-29 15:39:11.412 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
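The Acquiring/acquired/released triple around the resource update is oslo.concurrency's standard instrumentation: wrapping a function with lockutils.synchronized emits exactly these DEBUG lines, including the waited and held timings. A sketch of the pattern (the decorator is the real oslo API; the function name simply echoes the log):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def _update_available_resource():
        # Runs only while holding the "compute_resources" lock; the
        # acquire/release messages and durations are logged by lockutils.
        ...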
Nov 29 15:39:12 compute-0 podman[245398]: 2025-11-29 15:39:12.699886392 +0000 UTC m=+0.136689914 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:39:14 compute-0 nova_compute[189485]: 2025-11-29 15:39:14.413 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:14 compute-0 nova_compute[189485]: 2025-11-29 15:39:14.414 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:14 compute-0 podman[245418]: 2025-11-29 15:39:14.791711993 +0000 UTC m=+0.080825033 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:39:15 compute-0 nova_compute[189485]: 2025-11-29 15:39:15.178 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:15 compute-0 nova_compute[189485]: 2025-11-29 15:39:15.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:15 compute-0 nova_compute[189485]: 2025-11-29 15:39:15.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
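_reclaim_queued_deletes only reclaims anything when nova.conf sets reclaim_instance_interval to a positive number of seconds, which turns instance delete into soft-delete with deferred cleanup; at the default of 0 the periodic task short-circuits, which is exactly the guard being logged. A minimal sketch of that guard (a plain value stands in for nova's CONF object):

    reclaim_instance_interval = 0  # nova.conf default; > 0 enables deferred reclaim
    if reclaim_instance_interval <= 0:
        print('CONF.reclaim_instance_interval <= 0, skipping...')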
Nov 29 15:39:15 compute-0 nova_compute[189485]: 2025-11-29 15:39:15.909 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:16 compute-0 nova_compute[189485]: 2025-11-29 15:39:16.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:39:20 compute-0 nova_compute[189485]: 2025-11-29 15:39:20.181 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:20 compute-0 nova_compute[189485]: 2025-11-29 15:39:20.912 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:25 compute-0 nova_compute[189485]: 2025-11-29 15:39:25.184 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:25 compute-0 nova_compute[189485]: 2025-11-29 15:39:25.916 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
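The recurring [POLLIN] on fd 26 lines are ovsdbapp's IDL loop waking whenever the OVSDB connection (fd 26) becomes readable, here roughly every 5 s as keepalive traffic arrives. The primitive underneath is poll(2); a self-contained stdlib demonstration, with a socketpair standing in for the OVSDB socket:

    import select
    import socket

    a, b = socket.socketpair()
    poller = select.poll()
    poller.register(a.fileno(), select.POLLIN)
    b.send(b'echo')            # make the watched fd readable
    print(poller.poll(5000))   # -> [(a.fileno(), select.POLLIN)]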
Nov 29 15:39:28 compute-0 podman[245442]: 2025-11-29 15:39:28.61696612 +0000 UTC m=+0.064396651 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:39:29 compute-0 podman[203677]: time="2025-11-29T15:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:39:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:39:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4785 "" "Go-http-client/1.1"
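The two GETs above (Go-http-client against /v4.9.3/libpod/...) are consistent with prometheus-podman-exporter scraping the libpod REST API through the socket it is configured with (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data at 15:39:28). The containers/json query can be reproduced with only the Python stdlib; the socket path and API version are taken from the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over an AF_UNIX socket."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())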
Nov 29 15:39:30 compute-0 nova_compute[189485]: 2025-11-29 15:39:30.187 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:30 compute-0 nova_compute[189485]: 2025-11-29 15:39:30.918 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:31 compute-0 openstack_network_exporter[205841]: ERROR   15:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:39:31 compute-0 openstack_network_exporter[205841]: ERROR   15:39:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:39:31 compute-0 openstack_network_exporter[205841]: ERROR   15:39:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:39:31 compute-0 openstack_network_exporter[205841]: ERROR   15:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:39:31 compute-0 openstack_network_exporter[205841]: ERROR   15:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
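These recurring exporter errors are benign on this node: ovn-northd runs only on controller nodes, the exporter finds no ovsdb-server control socket under its mounted run directories, and the dpif-netdev/pmd-* appctl commands apply only to userspace (DPDK) datapaths, which this host does not use. The failing lookup is essentially a glob for <daemon>.<pid>.ctl control sockets; a sketch of the same check, with directories assumed from the container's /run/ovn and /run/openvswitch volume mounts:

    import glob

    # Empty on a compute node: northd only runs on controllers.
    print(glob.glob('/run/ovn/ovn-northd.*.ctl'))
    # The exporter likewise finds no ovsdb-server control socket here.
    print(glob.glob('/run/openvswitch/ovsdb-server.*.ctl'))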
Nov 29 15:39:35 compute-0 nova_compute[189485]: 2025-11-29 15:39:35.190 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:35 compute-0 nova_compute[189485]: 2025-11-29 15:39:35.921 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:37 compute-0 podman[245465]: 2025-11-29 15:39:37.706033761 +0000 UTC m=+0.141213405 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:39:39 compute-0 podman[245484]: 2025-11-29 15:39:39.679691242 +0000 UTC m=+0.115621147 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 29 15:39:39 compute-0 podman[245486]: 2025-11-29 15:39:39.685309443 +0000 UTC m=+0.107839279 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 15:39:39 compute-0 podman[245485]: 2025-11-29 15:39:39.697045809 +0000 UTC m=+0.126264454 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 15:39:39 compute-0 podman[245487]: 2025-11-29 15:39:39.74772123 +0000 UTC m=+0.169704871 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 15:39:40 compute-0 nova_compute[189485]: 2025-11-29 15:39:40.193 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:40 compute-0 podman[245559]: 2025-11-29 15:39:40.696482522 +0000 UTC m=+0.139418407 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 29 15:39:40 compute-0 nova_compute[189485]: 2025-11-29 15:39:40.926 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:43 compute-0 podman[245579]: 2025-11-29 15:39:43.71822765 +0000 UTC m=+0.150158706 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
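Each health_status=healthy event is podman executing the healthcheck configured for that container; in these edpm config_data blocks the 'test' field is the command and 'mount' is the host directory bound at /openstack so the check script is available inside the container. A rough sketch of how those two fields could translate into podman flags (--health-cmd and --volume are real podman options; the mapping is an illustration, not the edpm_ansible code):

    cfg = {'test': '/openstack/healthcheck',
           'mount': '/var/lib/openstack/healthchecks/multipathd'}
    cmd = ['podman', 'run', '--health-cmd', cfg['test'],
           '--volume', cfg['mount'] + ':/openstack:ro,z']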
Nov 29 15:39:45 compute-0 nova_compute[189485]: 2025-11-29 15:39:45.197 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:45 compute-0 podman[245599]: 2025-11-29 15:39:45.622433822 +0000 UTC m=+0.062144831 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:39:45 compute-0 nova_compute[189485]: 2025-11-29 15:39:45.930 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:50 compute-0 nova_compute[189485]: 2025-11-29 15:39:50.199 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:50 compute-0 nova_compute[189485]: 2025-11-29 15:39:50.933 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:55 compute-0 nova_compute[189485]: 2025-11-29 15:39:55.202 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:55 compute-0 nova_compute[189485]: 2025-11-29 15:39:55.936 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:39:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:39:59.185 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:39:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:39:59.186 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:39:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:39:59.186 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:39:59 compute-0 podman[245626]: 2025-11-29 15:39:59.638242731 +0000 UTC m=+0.086049563 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:39:59 compute-0 podman[203677]: time="2025-11-29T15:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:39:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:39:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4782 "" "Go-http-client/1.1"
Nov 29 15:40:00 compute-0 nova_compute[189485]: 2025-11-29 15:40:00.204 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:00 compute-0 nova_compute[189485]: 2025-11-29 15:40:00.939 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.053 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.053 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
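The pair of DEBUG lines says the [pollsters] source has more pollsters than worker threads, so with [1] thread the pollsters run back to back and a polling cycle takes roughly the sum of the individual poll durations. The shape of it, stdlib only:

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ['cpu', 'memory.usage', 'network.outgoing.bytes']  # > workers
    with ThreadPoolExecutor(max_workers=1) as pool:  # the "[1] threads" above
        for name in pollsters:
            # With one worker these run one after another, not concurrently.
            pool.submit(print, 'polled', name)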
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.054 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.055 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.064 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416', 'name': 'vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
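[Editor's note] The run of DEBUG lines above shows the polling manager registering each pollster it loaded as a stevedore extension, all sharing one ThreadPoolExecutor and starting with empty cache, history, and discovery-cache dicts. A minimal sketch of that loading pattern, assuming the conventional ceilometer.poll.compute entry-point namespace and an arbitrary worker count (both are illustrative assumptions, not read from this deployment):

    from concurrent.futures import ThreadPoolExecutor

    from stevedore import extension

    def load_pollsters(namespace="ceilometer.poll.compute"):
        # invoke_on_load instantiates each plugin, yielding the
        # stevedore.extension.Extension objects printed in the log above.
        mgr = extension.ExtensionManager(namespace=namespace, invoke_on_load=True)
        executor = ThreadPoolExecutor(max_workers=4)  # shared by all pollsters
        # One registry entry per pollster, mirroring "with cache [{}],
        # pollster history [{}], and discovery cache [{}]" above.
        registry = {ext.name: {"pollster": ext.obj, "cache": {}, "history": {}}
                    for ext in mgr}
        return registry, executor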
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.078 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.083 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '98515579-e916-472d-99ab-5492cfa34aea', 'name': 'vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
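[Editor's note] Interleaved with the registrations, discover_libvirt_polling logs one instance-data dict per running guest, combining libvirt domain identity with Nova attributes (flavor, image, tenant, metering metadata). A sketch of the libvirt half of that discovery, assuming the python3-libvirt bindings and the default qemu:///system URI; the Nova fields in the real dicts come from domain metadata and are omitted here:

    import libvirt

    def discover_running_instances(uri="qemu:///system"):
        conn = libvirt.open(uri)
        try:
            instances = []
            for dom in conn.listAllDomains(libvirt.VIR_CONNECT_LIST_DOMAINS_ACTIVE):
                # The UUID and domain name mirror the 'id' and
                # 'OS-EXT-SRV-ATTR:instance_name' fields logged above.
                instances.append({"id": dom.UUIDString(),
                                  "instance_name": dom.name()})
            return instances
        finally:
            conn.close()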
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.084 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
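[Editor's note] No source here requires coordination, so the hashrings stay [None]. When coordination is enabled, resources are partitioned across agents with a hash ring so a fleet of agents shares the instance list without double-polling; a rough sketch of that idea using the tooz hash ring, with hypothetical node names (treat the exact tooz API as an assumption):

    from tooz import hashring

    def owns_resource(my_node, all_nodes, resource_id):
        # Each agent polls only the resources that the ring maps to it.
        ring = hashring.HashRing(all_nodes)
        return my_node in ring.get_nodes(resource_id.encode())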
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.085 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:40:01.084979) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
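[Editor's note] Note the PID column: the polls run in worker 14 while the "Updated heartbeat" confirmations come from process 12, so the heartbeat appears to be recorded by a separate status process. The bookkeeping itself is just a name-to-timestamp map; a tiny sketch:

    from datetime import datetime, timezone

    _heartbeats = {}

    def update_heartbeat(pollster_name):
        # Matches "Updated heartbeat for <name> (<ISO timestamp>)" above.
        ts = datetime.now(timezone.utc)
        _heartbeats[pollster_name] = ts
        return ts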
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.092 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.098 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.104 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
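[Editor's note] network.outgoing.bytes is a cumulative per-vNIC counter read from libvirt, so the volumes above (2286, 2342, 2398) are lifetime tx byte totals, not rates. A sketch of the underlying call; the interface name is a placeholder, since the agent resolves real tap device names from each domain's XML:

    def outgoing_bytes(dom, iface="tap0"):
        # dom: libvirt.virDomain (see the discovery sketch above).
        # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
        # tx_bytes, tx_packets, tx_errs, tx_drop); index 4 is the
        # cumulative tx byte counter behind "volume: 2286" etc.
        return dom.interfaceStats(iface)[4]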
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.105 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.106 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.106 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.107 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.107 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
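[Editor's note] The .delta variant reports the change in that cumulative counter since the previous cycle; here two guests show 0 and one shows 70. A sketch of the bookkeeping, reporting 0 on the first observation or after a counter reset:

    _previous = {}  # (resource_id, meter) -> last cumulative value

    def delta_sample(resource_id, meter, current):
        key = (resource_id, meter)
        prev = _previous.get(key)
        _previous[key] = current
        if prev is None or current < prev:
            # First cycle, or counter reset (e.g. instance reboot).
            return 0
        return current - prev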
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.108 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.108 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.109 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:40:01.106289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:40:01.109390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.148 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.189 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.224 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
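[Editor's note] memory.usage is reported in MiB, hence the fractional volumes: 49.07421875 MiB is exactly 50252 KiB. A sketch of deriving it from libvirt's memory statistics; which keys the balloon driver exposes varies per guest, so the fallback to host-side RSS is an assumption:

    def memory_usage_mib(dom):
        # dom: libvirt.virDomain; memoryStats values are in KiB.
        stats = dom.memoryStats()
        if "available" in stats and "unused" in stats:
            used_kib = stats["available"] - stats["unused"]
        else:
            used_kib = stats.get("rss", 0)  # fallback: host-side resident size
        return used_kib / 1024.0  # e.g. 50252 KiB -> 49.07421875 MiB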
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.226 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.226 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.227 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.227 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.228 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.228 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.229 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.229 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.230 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.231 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.231 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.233 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:40:01.226732) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.234 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:40:01.230453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.234 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.234 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.235 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.236 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.237 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.238 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:40:01.234130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.238 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.238 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.239 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:40:01.239319) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.351 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.352 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.353 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 openstack_network_exporter[205841]: ERROR   15:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:40:01 compute-0 openstack_network_exporter[205841]: ERROR   15:40:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:40:01 compute-0 openstack_network_exporter[205841]: ERROR   15:40:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:40:01 compute-0 openstack_network_exporter[205841]: ERROR   15:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:40:01 compute-0 openstack_network_exporter[205841]: ERROR   15:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
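[Editor's note] The exporter errors above are expected on a compute node: ovn-northd and the OVS database server run on the controllers, so no local unixctl control sockets exist, and the dpif-netdev queries fail because there is no userspace (PMD) datapath here either. A sketch of the socket probe that ovs-appctl-style tooling performs, using conventional run directories (the paths are defaults, not read from this host):

    import glob

    def find_control_socket(daemon,
                            run_dirs=("/var/run/ovn", "/var/run/openvswitch")):
        # unixctl sockets are named <daemon>.<pid>.ctl; returning None here
        # corresponds to "no control socket files found" in the log above.
        for d in run_dirs:
            matches = glob.glob(f"{d}/{daemon}.*.ctl")
            if matches:
                return matches[0]
        return None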
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.483 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.485 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.486 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.600 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.601 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.602 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.603 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
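[Editor's note] disk.device.read.bytes emits one sample per attached device, which is why each guest logs three volumes (consistent with the m1.small flavor's 1 GB root plus 1 GB ephemeral disk and one small extra device). A sketch of the per-device read; the device names are placeholders, since the agent parses them from the domain XML:

    def read_bytes_per_device(dom, devices=("vda", "vdb", "vdc")):
        # dom: libvirt.virDomain. blockStats returns the 5-tuple
        # (rd_req, rd_bytes, wr_req, wr_bytes, errs).
        samples = {}
        for dev in devices:
            rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
            samples[dev] = rd_bytes  # cumulative, e.g. 23308800 above
        return samples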
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.603 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.604 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.604 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.605 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.605 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:40:01.604516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.606 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.606 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.607 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.607 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.607 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.608 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.608 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.608 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.609 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:40:01.608414) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.655 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.655 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.656 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.695 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.696 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.696 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.730 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.731 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.731 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
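[Editor's note] disk.device.capacity is each device's virtual size in bytes: 1073741824 is exactly 1 GiB, matching the flavor's 1 GB disks, while the third device (583680 bytes, i.e. 570 KiB) is presumably a config drive. The underlying call, sketched:

    def device_capacity_bytes(dom, dev):
        # dom: libvirt.virDomain. blockInfo returns (capacity, allocation,
        # physical) in bytes; capacity is the virtual size seen above.
        capacity, allocation, physical = dom.blockInfo(dev)
        return capacity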
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.733 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.735 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/cpu volume: 34480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.735 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 43360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.736 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/cpu volume: 38590000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:40:01.734195) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
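[Editor's note] The cpu meter is cumulative guest CPU time in nanoseconds (34480000000 ns is about 34.5 s of CPU time since boot); a utilization percentage would come from its delta over the polling interval and the vCPU count. A sketch of both steps:

    def cpu_time_ns(dom):
        # dom.info() -> [state, maxMemKiB, memKiB, nrVirtCpu, cpuTimeNs]
        state, max_mem, mem, nr_vcpu, cpu_time = dom.info()
        return cpu_time

    def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus):
        # Fraction of available CPU time actually consumed this cycle.
        return 100.0 * (cur_ns - prev_ns) / (interval_s * vcpus * 1e9)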
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.737 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.739 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 489570269 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:40:01.738103) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.739 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 78552201 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.740 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 63090868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.740 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.741 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.741 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.742 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 446638356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.742 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 82659007 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.742 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.latency volume: 63931559 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.744 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
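[Editor's note] disk.device.read.latency is the cumulative time spent servicing reads, in nanoseconds per device, taken from libvirt's extended block statistics rather than the basic 5-tuple. A sketch; the parameter key follows libvirt's block-stats naming and should be treated as an assumption:

    def read_latency_ns(dom, dev):
        # blockStatsFlags returns a dict of extended counters; the total
        # read time corresponds to volumes like 489570269 ns above.
        stats = dom.blockStatsFlags(dev)
        return stats.get("rd_total_times", 0)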
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.744 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.744 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.744 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.745 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.745 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.746 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.747 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.747 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:40:01.745542) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.747 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.748 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.748 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.748 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.749 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.749 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.750 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.751 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.751 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.752 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.752 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.752 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.753 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.754 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.753 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:40:01.752638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.754 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.755 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.755 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.756 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.757 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.757 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.758 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.759 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.760 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.760 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.761 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.761 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.761 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.762 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.763 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.763 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:40:01.762082) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.764 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.764 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.765 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.765 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.766 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.766 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.767 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.768 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.769 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.769 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.769 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.769 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.769 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.769 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.770 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 1406170011 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.770 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 9552907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.770 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.771 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.771 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.771 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.772 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 861553512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.772 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 8222101 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.772 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:40:01.769766) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.773 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
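[editor's note] The disk.device.write.latency values above read as cumulative nanosecond counters (libvirt reports block write time that way), which pair with the cumulative disk.device.write.requests counts polled moments later for the same devices. Assuming that interpretation, a lifetime average per request falls out directly; differencing two polling intervals would give the recent rate instead. A worked example using the first device of instance dd0fdf5e:

```python
# Worked example, assuming the meters are cumulative counters as
# libvirt's block stats suggest (an assumption, not stated in the log).
total_latency_ns = 1_406_170_011   # disk.device.write.latency, first device
total_requests = 234               # disk.device.write.requests, first device

avg_ns = total_latency_ns / total_requests
print(f"average write latency: {avg_ns / 1e6:.2f} ms/request")  # ~6.01 ms
```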
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.773 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.773 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.773 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.773 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.774 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.774 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.774 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.775 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.774 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:40:01.773893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
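[editor's note] power.state reporting volume 1 for all three instances is easiest to read against libvirt's domain-state enum, assuming (as the compute pollster's data source suggests) the value passes through virDomainState unchanged. A reference table:

```python
# Standard libvirt virDomainState values; 1 == running, which matches
# the three power.state samples above.
LIBVIRT_DOMAIN_STATE = {
    0: "nostate",
    1: "running",
    2: "blocked",
    3: "paused",
    4: "shutdown",     # being shut down
    5: "shutoff",
    6: "crashed",
    7: "pmsuspended",
}
print(LIBVIRT_DOMAIN_STATE[1])  # "running"
```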
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.775 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.776 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.776 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.776 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.777 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.777 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.777 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.777 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.778 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.778 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.778 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.779 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.777 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:40:01.776202) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.779 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.779 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.780 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.780 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.780 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.780 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.781 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.781 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.781 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:40:01.780198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.782 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.782 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.782 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.782 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:40:01.782462) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.783 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.783 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.784 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.784 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.784 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.784 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.785 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.785 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.785 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.785 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.786 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.786 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 22290432 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.785 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:40:01.784238) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.786 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.787 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.787 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.787 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.788 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.788 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.788 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.789 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.789 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.789 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:40:01.788219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.789 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.790 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.790 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.790 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.790 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.791 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:40:01.790391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.791 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.791 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.791 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.792 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.792 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.792 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.793 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.793 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.793 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:40:01.792208) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.793 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.794 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.794 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.794 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.795 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:40:01.794341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.794 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.795 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.796 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.796 14 DEBUG ceilometer.compute.pollsters [-] 98515579-e916-472d-99ab-5492cfa34aea/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.797 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:40:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:40:01.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
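[editor's note] The burst of "Finished processing pollster" lines closes out this polling task. When auditing an excerpt like this, a small script can confirm that every meter that began polling also finished processing; the log file path below is hypothetical.

```python
# Consistency check over a saved journal excerpt: every pollster that
# logs "Polling pollster X" should later log
# "Finished processing pollster [X]". The path is hypothetical.
import re

started, finished = set(), set()
with open("compute-0-ceilometer.log") as fh:
    for line in fh:
        if m := re.search(r"Polling pollster (\S+) in the context", line):
            started.add(m.group(1))
        if m := re.search(r"Finished processing pollster \[([^\]]+)\]", line):
            finished.add(m.group(1))

print("never finished:", sorted(started - finished))
```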
Nov 29 15:40:05 compute-0 nova_compute[189485]: 2025-11-29 15:40:05.207 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:40:05 compute-0 nova_compute[189485]: 2025-11-29 15:40:05.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:40:05 compute-0 nova_compute[189485]: 2025-11-29 15:40:05.942 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
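[editor's note] The recurring "[POLLIN] on fd 26" lines are the OVS IDL's poll loop logging that its OVSDB socket became readable. A tiny self-contained demo of the same mechanism, using Python's select.poll on a pipe instead of an OVSDB connection:

```python
# Minimal illustration of a POLLIN wakeup: register a file descriptor,
# block in poll(), and wake when data arrives. Demo uses a local pipe.
import os
import select

r, w = os.pipe()
poller = select.poll()
poller.register(r, select.POLLIN)

os.write(w, b"ovsdb update")            # pretend the server sent data
for fd, events in poller.poll(1000):    # returns once fd is readable
    if events & select.POLLIN:
        print(f"[POLLIN] on fd {fd}:", os.read(fd, 64))
```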
Nov 29 15:40:08 compute-0 nova_compute[189485]: 2025-11-29 15:40:08.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:40:08 compute-0 nova_compute[189485]: 2025-11-29 15:40:08.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
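[editor's note] The "Running periodic task" lines come from oslo.service's periodic-task machinery, which nova's ComputeManager builds on. A minimal registration sketch under that assumption (not nova's actual class; the spacing value is illustrative):

```python
# Hypothetical sketch of oslo.service periodic-task registration.
from oslo_config import cfg
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # refresh one instance's network info cache per run
        pass

mgr = Manager(cfg.CONF)
mgr.run_periodic_tasks(context=None)   # emits "Running periodic task ..."
```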
Nov 29 15:40:08 compute-0 podman[245651]: 2025-11-29 15:40:08.687207369 +0000 UTC m=+0.130253191 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 15:40:08 compute-0 nova_compute[189485]: 2025-11-29 15:40:08.916 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:40:08 compute-0 nova_compute[189485]: 2025-11-29 15:40:08.917 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:40:08 compute-0 nova_compute[189485]: 2025-11-29 15:40:08.918 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:40:10 compute-0 nova_compute[189485]: 2025-11-29 15:40:10.210 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:40:10 compute-0 podman[245671]: 2025-11-29 15:40:10.648330489 +0000 UTC m=+0.079511217 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 15:40:10 compute-0 podman[245670]: 2025-11-29 15:40:10.652544173 +0000 UTC m=+0.098004075 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, name=ubi9, io.openshift.tags=base rhel9, version=9.4, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 29 15:40:10 compute-0 podman[245672]: 2025-11-29 15:40:10.68779475 +0000 UTC m=+0.123363556 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 15:40:10 compute-0 podman[245673]: 2025-11-29 15:40:10.702469683 +0000 UTC m=+0.137989738 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
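The health_status events above are podman's periodic healthchecks reporting on the kepler, ceilometer_agent_ipmi and ovn_controller containers. A minimal sketch reading the same state out-of-band with `podman inspect` (container names taken from the events; the `.State.Health` fields are podman's standard inspect output for containers with a healthcheck):

    import json
    import subprocess

    def container_health(name):
        # podman inspect exposes the state the health_status events report:
        # .State.Health carries Status ("healthy"/"unhealthy") and FailingStreak.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        )
        return json.loads(out.stdout)

    for name in ("kepler", "ceilometer_agent_ipmi", "ovn_controller"):
        h = container_health(name)
        print(name, h["Status"], "failing streak:", h["FailingStreak"])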
Nov 29 15:40:10 compute-0 nova_compute[189485]: 2025-11-29 15:40:10.946 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:10 compute-0 nova_compute[189485]: 2025-11-29 15:40:10.953 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updating instance_info_cache with network_info: [{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
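The network_info payload logged above is plain JSON; a short sketch extracting the fixed and floating addresses from it (the structure below is an abbreviated copy of that entry, not a complete one):

    import json

    # Abbreviated copy of the network_info entry logged above (one port shown).
    network_info = json.loads("""
    [{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155",
      "address": "fa:16:3e:96:c1:c2",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.225", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.224",
                                   "type": "floating"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floating)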
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.115 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.116 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.117 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.118 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.140 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.141 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.142 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
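The acquire/release trio above is oslo.concurrency's lockutils serializing resource-tracker work under the "compute_resources" lock. A minimal sketch of the same pattern, assuming only oslo.concurrency is installed (the function body is a placeholder; only the lock name is taken from the log):

    from oslo_concurrency import lockutils

    # Same lock name as the log lines above; decorated callables serialize
    # against each other within the process.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # placeholder for the cache-pruning work being guarded

    clean_compute_node_cache()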
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.142 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.251 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.333 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
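The pair of lines above shows how nova probes image metadata: oslo.concurrency's processutils re-execs through `oslo_concurrency.prlimit` to cap the child's address space (`--as=1073741824`, 1 GiB) and CPU time (`--cpu=30`) before running `qemu-img info --force-share`. A minimal sketch of the same call via `processutils.execute` (the disk path here is a placeholder, not a real file):

    from oslo_concurrency import processutils

    # Same limits as the logged command: 1 GiB address space, 30 s CPU time.
    # Passing prlimit= makes processutils build the
    # "python3 -m oslo_concurrency.prlimit ..." wrapper seen in the log.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",  # placeholder path
        "--force-share", "--output=json",
        prlimit=limits,
    )
    print(out)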
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.334 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.395 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.397 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.472 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.473 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.552 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.561 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.639 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.640 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 podman[245761]: 2025-11-29 15:40:11.641687327 +0000 UTC m=+0.097905891 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, managed_by=edpm_ansible, version=9.6, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.703 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.704 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.805 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.807 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.864 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.871 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.929 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.931 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.993 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:11 compute-0 nova_compute[189485]: 2025-11-29 15:40:11.996 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.057 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.058 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.115 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.511 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.513 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4764MB free_disk=72.33856964111328GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.513 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.514 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.633 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.633 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 98515579-e916-472d-99ab-5492cfa34aea actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.634 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.634 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.634 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.737 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.758 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
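Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio; plugging in the numbers copied from the line above reproduces what the scheduler sees:

    # Inventory copied from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, "schedulable capacity:", capacity)
    # VCPU: 32.0, MEMORY_MB: 7167.0, DISK_GB: 70.2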
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.760 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:40:12 compute-0 nova_compute[189485]: 2025-11-29 15:40:12.761 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:40:13 compute-0 nova_compute[189485]: 2025-11-29 15:40:13.128 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:40:13 compute-0 nova_compute[189485]: 2025-11-29 15:40:13.128 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:40:13 compute-0 nova_compute[189485]: 2025-11-29 15:40:13.128 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:40:13 compute-0 nova_compute[189485]: 2025-11-29 15:40:13.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:40:14 compute-0 podman[245805]: 2025-11-29 15:40:14.690630187 +0000 UTC m=+0.122402571 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 15:40:15 compute-0 nova_compute[189485]: 2025-11-29 15:40:15.213 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:15 compute-0 nova_compute[189485]: 2025-11-29 15:40:15.949 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:16 compute-0 podman[245824]: 2025-11-29 15:40:16.682317008 +0000 UTC m=+0.122479621 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:40:17 compute-0 nova_compute[189485]: 2025-11-29 15:40:17.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:40:17 compute-0 nova_compute[189485]: 2025-11-29 15:40:17.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 15:40:20 compute-0 nova_compute[189485]: 2025-11-29 15:40:20.217 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:20 compute-0 nova_compute[189485]: 2025-11-29 15:40:20.953 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:25 compute-0 nova_compute[189485]: 2025-11-29 15:40:25.220 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:25 compute-0 nova_compute[189485]: 2025-11-29 15:40:25.956 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:29 compute-0 podman[203677]: time="2025-11-29T15:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:40:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:40:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
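These two requests are the libpod REST API being polled over the podman socket (podman_exporter's config later in the log sets CONTAINER_HOST to unix:///run/podman/podman.sock). A minimal stdlib-only sketch issuing the same containers/json call, assuming that socket path and root access to it:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection variant that connects to a unix socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c["Names"], c["State"])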
Nov 29 15:40:30 compute-0 nova_compute[189485]: 2025-11-29 15:40:30.222 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:30 compute-0 podman[245848]: 2025-11-29 15:40:30.656004379 +0000 UTC m=+0.090874203 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:40:30 compute-0 nova_compute[189485]: 2025-11-29 15:40:30.959 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:31 compute-0 openstack_network_exporter[205841]: ERROR   15:40:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:40:31 compute-0 openstack_network_exporter[205841]: ERROR   15:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:40:31 compute-0 openstack_network_exporter[205841]: ERROR   15:40:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:40:31 compute-0 openstack_network_exporter[205841]: ERROR   15:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:40:31 compute-0 openstack_network_exporter[205841]: ERROR   15:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:40:35 compute-0 nova_compute[189485]: 2025-11-29 15:40:35.227 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:35 compute-0 nova_compute[189485]: 2025-11-29 15:40:35.962 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.147 189489 DEBUG nova.compute.manager [req-b78d4061-1e00-4748-98a4-d5e7bdb41349 req-55281fdc-0130-43c6-b05c-5199bdeb715e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received event network-changed-05839a7c-53a3-4f4b-b076-68284d149a00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.148 189489 DEBUG nova.compute.manager [req-b78d4061-1e00-4748-98a4-d5e7bdb41349 req-55281fdc-0130-43c6-b05c-5199bdeb715e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Refreshing instance network info cache due to event network-changed-05839a7c-53a3-4f4b-b076-68284d149a00. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.148 189489 DEBUG oslo_concurrency.lockutils [req-b78d4061-1e00-4748-98a4-d5e7bdb41349 req-55281fdc-0130-43c6-b05c-5199bdeb715e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.149 189489 DEBUG oslo_concurrency.lockutils [req-b78d4061-1e00-4748-98a4-d5e7bdb41349 req-55281fdc-0130-43c6-b05c-5199bdeb715e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.149 189489 DEBUG nova.network.neutron [req-b78d4061-1e00-4748-98a4-d5e7bdb41349 req-55281fdc-0130-43c6-b05c-5199bdeb715e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Refreshing network info cache for port 05839a7c-53a3-4f4b-b076-68284d149a00 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.734 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.735 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.735 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.736 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.736 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.739 189489 INFO nova.compute.manager [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Terminating instance#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.741 189489 DEBUG nova.compute.manager [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:40:37 compute-0 kernel: tap05839a7c-53 (unregistering): left promiscuous mode
Nov 29 15:40:37 compute-0 NetworkManager[56360]: <info>  [1764430837.8153] device (tap05839a7c-53): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:40:37 compute-0 ovn_controller[97827]: 2025-11-29T15:40:37Z|00054|binding|INFO|Releasing lport 05839a7c-53a3-4f4b-b076-68284d149a00 from this chassis (sb_readonly=0)
Nov 29 15:40:37 compute-0 ovn_controller[97827]: 2025-11-29T15:40:37Z|00055|binding|INFO|Setting lport 05839a7c-53a3-4f4b-b076-68284d149a00 down in Southbound
Nov 29 15:40:37 compute-0 ovn_controller[97827]: 2025-11-29T15:40:37Z|00056|binding|INFO|Removing iface tap05839a7c-53 ovn-installed in OVS
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.826 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.839 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:48:4a:52 192.168.0.227'], port_security=['fa:16:3e:48:4a:52 192.168.0.227'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nju3ymh64jso-aat7xqwj3j4y-2ikheen5x3vw-port-q265egptd67m', 'neutron:cidrs': '192.168.0.227/24', 'neutron:device_id': '98515579-e916-472d-99ab-5492cfa34aea', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa63adc8-00c5-408f-a9a0-653db4d11058', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nju3ymh64jso-aat7xqwj3j4y-2ikheen5x3vw-port-q265egptd67m', 'neutron:project_id': '04d676205d9142d19f3d4ce7389f72a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab1ce576-0f3a-4a3e-abf1-69502fd41864', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566ecd39-faeb-413e-8894-df94f2ba695a, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=05839a7c-53a3-4f4b-b076-68284d149a00) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.841 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 05839a7c-53a3-4f4b-b076-68284d149a00 in datapath fa63adc8-00c5-408f-a9a0-653db4d11058 unbound from our chassis#033[00m
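The "Matched UPDATE: PortBindingUpdatedEvent(...)" entry above is ovsdbapp's event machinery: the agent registers a RowEvent against the Port_Binding table and its run() method fires on matching updates. A minimal sketch of such an event class, assuming ovsdbapp is installed; the handler body is illustrative, not the agent's real one:

    from ovsdbapp.backend.ovs_idl import event

    class PortBindingUpdatedEvent(event.RowEvent):
        """Fires on updates to Port_Binding rows, as matched in the log above."""

        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None --
            # the same triple printed in the Matched UPDATE line.
            super().__init__((self.ROW_UPDATE,), "Port_Binding", None)
            self.event_name = "PortBindingUpdatedEvent"

        def run(self, event, row, old):
            # `old` holds prior values of the changed columns; comparing
            # row.up against old lets a handler tell bind from unbind.
            print("lport", row.logical_port, "up:", row.up)

    # In the real agent this is registered with the IDL's notify handler,
    # e.g. idl.notify_handler.watch_event(PortBindingUpdatedEvent())
    PortBindingUpdatedEvent()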
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.845 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa63adc8-00c5-408f-a9a0-653db4d11058#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.852 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.868 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a47fbe9f-0ed1-44c8-8b9e-8d564af6d73b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:40:37 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Nov 29 15:40:37 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 39.769s CPU time.
Nov 29 15:40:37 compute-0 systemd-machined[155802]: Machine qemu-3-instance-00000003 terminated.
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.907 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[59211752-aa65-4daf-ae05-b68c17c2c477]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.910 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[517e372b-144b-4043-9660-1f3d5aa91086]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.935 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[18c661c9-8f32-4527-975e-8e585baab02d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.946 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.949 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.955 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[abf9fbbc-3d6f-4cc2-a206-b8b13c6e02e2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa63adc8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:9e:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 13, 'rx_bytes': 532, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373724, 'reachable_time': 43046, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245883, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.973 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[958fc6c6-2811-4254-ba03-4fa7a27a415c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373741, 'tstamp': 373741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245885, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373746, 'tstamp': 373746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245885, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
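The privsep replies above are pyroute2 netlink dumps taken inside the metadata namespace (note 'target': 'ovnmeta-fa63adc8-...'). A minimal sketch reading the same link and address state directly with pyroute2, assuming root privileges and that the namespace still exists:

    from pyroute2 import NetNS

    # Namespace name copied from the 'target' field of the dumps above.
    with NetNS("ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058") as ns:
        idx = ns.link_lookup(ifname="tapfa63adc8-01")[0]
        for addr in ns.get_addr(index=idx):
            attrs = dict(addr["attrs"])
            # Should report 169.254.169.254/32 and 192.168.0.2/24 per the dump.
            print(attrs.get("IFA_ADDRESS"), "/", addr["prefixlen"])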
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.975 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa63adc8-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.977 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:37 compute-0 nova_compute[189485]: 2025-11-29 15:40:37.981 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.982 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa63adc8-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.982 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.982 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa63adc8-00, col_values=(('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.983 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
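The three transactions above are the agent's idempotent port wiring: drop the tap from br-ex if present, add it to br-int with may_exist=True, and set external_ids:iface-id so ovn-controller can bind the port to its logical switch port; "Transaction caused no change" simply means the database already held the desired state. A sketch of the same sequence through ovsdbapp's Open_vSwitch API (the local ovsdb-server socket path is an assumption; port, bridge, and iface-id values are taken from the log):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # One transaction, three idempotent commands, mirroring the log above.
    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.del_port('tapfa63adc8-00', bridge='br-ex', if_exists=True))
        txn.add(ovs.add_port('br-int', 'tapfa63adc8-00', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tapfa63adc8-00',
            ('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'})))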
Nov 29 15:40:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:37.984 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.031 189489 INFO nova.virt.libvirt.driver [-] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Instance destroyed successfully.#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.031 189489 DEBUG nova.objects.instance [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'resources' on Instance uuid 98515579-e916-472d-99ab-5492cfa34aea obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.045 189489 DEBUG nova.virt.libvirt.vif [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:32:42Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-aat7xqwj3j4y-2ikheen5x3vw-vnf-jrc2qenwdglw',id=3,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:32:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-gd7j7brc',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:32:49Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04ODk5ODEzNzg1ODg0MjUzMzU4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg4OTk4MTM3ODU4ODQyNTMzNTg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODg5OTgxMzc4NTg4NDI1MzM1OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg4OTk4MTM3ODU4ODQyNTMzNTg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04ODk5ODEzNzg1ODg0MjUzMzU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04ODk5ODEzNzg1ODg0MjUzMzU4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 29 15:40:38 compute-0 nova_compute[189485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODg5OTgxMzc4NTg4NDI1MzM1OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg4OTk4MTM3ODU4ODQyNTMzNTg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04ODk5ODEzNzg1ODg0MjUzMzU4PT0tLQo=',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=98515579-e916-472d-99ab-5492cfa34aea,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.045 189489 DEBUG nova.network.os_vif_util [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.177", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.045 189489 DEBUG nova.network.os_vif_util [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:48:4a:52,bridge_name='br-int',has_traffic_filtering=True,id=05839a7c-53a3-4f4b-b076-68284d149a00,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05839a7c-53') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.046 189489 DEBUG os_vif [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:4a:52,bridge_name='br-int',has_traffic_filtering=True,id=05839a7c-53a3-4f4b-b076-68284d149a00,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05839a7c-53') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.047 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.047 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05839a7c-53, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.049 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.051 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.052 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.054 189489 INFO os_vif [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:48:4a:52,bridge_name='br-int',has_traffic_filtering=True,id=05839a7c-53a3-4f4b-b076-68284d149a00,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap05839a7c-53')#033[00m
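nova first converts its own VIF dict into an os-vif VIFOpenVSwitch object (the Converting/Converted pair above), then hands teardown to the 'ovs' plugin via os_vif.unplug(). A sketch of that call path with objects rebuilt from the logged values; running it for real requires the vif_plug_ovs plugin and root privileges, and would actually remove the port:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the os-vif plugins (here: 'ovs')
    v = vif.VIFOpenVSwitch(
        id='05839a7c-53a3-4f4b-b076-68284d149a00',
        address='fa:16:3e:48:4a:52',
        plugin='ovs',
        vif_name='tap05839a7c-53',
        bridge_name='br-int',
        network=network.Network(id='fa63adc8-00c5-408f-a9a0-653db4d11058'))
    info = instance_info.InstanceInfo(
        uuid='98515579-e916-472d-99ab-5492cfa34aea')
    os_vif.unplug(v, info)  # resolves the 'ovs' plugin, then del-port on br-int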
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.055 189489 INFO nova.virt.libvirt.driver [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Deleting instance files /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea_del#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.055 189489 INFO nova.virt.libvirt.driver [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Deletion of /var/lib/nova/instances/98515579-e916-472d-99ab-5492cfa34aea_del complete#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.132 189489 INFO nova.compute.manager [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.133 189489 DEBUG oslo.service.loopingcall [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.133 189489 DEBUG nova.compute.manager [-] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.133 189489 DEBUG nova.network.neutron [-] [instance: 98515579-e916-472d-99ab-5492cfa34aea] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.173 189489 DEBUG nova.compute.manager [req-89faf0c8-753a-433c-83c3-6b3308f1a888 req-fe4a3bd3-f963-4254-8397-e687e951eeb0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received event network-vif-unplugged-05839a7c-53a3-4f4b-b076-68284d149a00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.174 189489 DEBUG oslo_concurrency.lockutils [req-89faf0c8-753a-433c-83c3-6b3308f1a888 req-fe4a3bd3-f963-4254-8397-e687e951eeb0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.174 189489 DEBUG oslo_concurrency.lockutils [req-89faf0c8-753a-433c-83c3-6b3308f1a888 req-fe4a3bd3-f963-4254-8397-e687e951eeb0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.174 189489 DEBUG oslo_concurrency.lockutils [req-89faf0c8-753a-433c-83c3-6b3308f1a888 req-fe4a3bd3-f963-4254-8397-e687e951eeb0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.174 189489 DEBUG nova.compute.manager [req-89faf0c8-753a-433c-83c3-6b3308f1a888 req-fe4a3bd3-f963-4254-8397-e687e951eeb0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] No waiting events found dispatching network-vif-unplugged-05839a7c-53a3-4f4b-b076-68284d149a00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.174 189489 DEBUG nova.compute.manager [req-89faf0c8-753a-433c-83c3-6b3308f1a888 req-fe4a3bd3-f963-4254-8397-e687e951eeb0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received event network-vif-unplugged-05839a7c-53a3-4f4b-b076-68284d149a00 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
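The Acquiring/acquired/released triplets are oslo.concurrency's standard trace for a named lock: the event dispatcher wraps the pop in a per-instance lock named "<uuid>-events" so external events and in-process waiters cannot race on the same event list. The pattern, in a minimal sketch using the lock name from the log:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('98515579-e916-472d-99ab-5492cfa34aea-events')
    def _pop_event():
        # critical section: look up and remove the waiter for this event
        pass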
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.521 189489 DEBUG nova.network.neutron [req-b78d4061-1e00-4748-98a4-d5e7bdb41349 req-55281fdc-0130-43c6-b05c-5199bdeb715e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updated VIF entry in instance network info cache for port 05839a7c-53a3-4f4b-b076-68284d149a00. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.522 189489 DEBUG nova.network.neutron [req-b78d4061-1e00-4748-98a4-d5e7bdb41349 req-55281fdc-0130-43c6-b05c-5199bdeb715e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updating instance_info_cache with network_info: [{"id": "05839a7c-53a3-4f4b-b076-68284d149a00", "address": "fa:16:3e:48:4a:52", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.227", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05839a7c-53", "ovs_interfaceid": "05839a7c-53a3-4f4b-b076-68284d149a00", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:40:38 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:40:38.045 189489 DEBUG nova.virt.libvirt.vif [None req-7f7ea1d8-1c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Nov 29 15:40:38 compute-0 nova_compute[189485]: 2025-11-29 15:40:38.544 189489 DEBUG oslo_concurrency.lockutils [req-b78d4061-1e00-4748-98a4-d5e7bdb41349 req-55281fdc-0130-43c6-b05c-5199bdeb715e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-98515579-e916-472d-99ab-5492cfa34aea" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.423 189489 DEBUG nova.network.neutron [-] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.442 189489 INFO nova.compute.manager [-] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Took 1.31 seconds to deallocate network for instance.#033[00m
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.491 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.492 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.625 189489 DEBUG nova.compute.provider_tree [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.644 189489 DEBUG nova.scheduler.client.report [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
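Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class, so the host above offers 32 VCPU, 7167 MB of RAM and about 70 GB of disk to the scheduler. Recomputing it from the fields in the logged dict:

    # Capacity per resource class, from the inventory logged above:
    # (total - reserved) * allocation_ratio
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2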
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.689 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:40:39 compute-0 podman[245907]: 2025-11-29 15:40:39.703770569 +0000 UTC m=+0.140233558 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
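The container health_status records here and below are podman's healthcheck timer firing: each container's config carries a 'healthcheck' entry (e.g. 'test': '/openstack/healthcheck compute', mounted read-only from the host), and health_status=healthy with health_failing_streak=0 means the last run exited 0. The same check can be triggered by hand; a sketch using the container name from the log:

    import subprocess

    # Runs the container's configured healthcheck once, like podman's timer does.
    r = subprocess.run(
        ['podman', 'healthcheck', 'run', 'ceilometer_agent_compute'],
        capture_output=True, text=True)
    print('healthy' if r.returncode == 0 else 'unhealthy')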
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.727 189489 INFO nova.scheduler.client.report [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Deleted allocations for instance 98515579-e916-472d-99ab-5492cfa34aea#033[00m
Nov 29 15:40:39 compute-0 nova_compute[189485]: 2025-11-29 15:40:39.811 189489 DEBUG oslo_concurrency.lockutils [None req-7f7ea1d8-1c04-45d0-8f9a-3c6c3d9b0ab2 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.076s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:40:40 compute-0 nova_compute[189485]: 2025-11-29 15:40:40.231 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:40 compute-0 nova_compute[189485]: 2025-11-29 15:40:40.357 189489 DEBUG nova.compute.manager [req-4e332f7c-a317-41b1-942a-1e45e8a4f50f req-c2a2d3c9-19c7-430a-95a5-dbfab9c8f9cf 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received event network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:40:40 compute-0 nova_compute[189485]: 2025-11-29 15:40:40.357 189489 DEBUG oslo_concurrency.lockutils [req-4e332f7c-a317-41b1-942a-1e45e8a4f50f req-c2a2d3c9-19c7-430a-95a5-dbfab9c8f9cf 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "98515579-e916-472d-99ab-5492cfa34aea-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:40:40 compute-0 nova_compute[189485]: 2025-11-29 15:40:40.358 189489 DEBUG oslo_concurrency.lockutils [req-4e332f7c-a317-41b1-942a-1e45e8a4f50f req-c2a2d3c9-19c7-430a-95a5-dbfab9c8f9cf 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:40:40 compute-0 nova_compute[189485]: 2025-11-29 15:40:40.358 189489 DEBUG oslo_concurrency.lockutils [req-4e332f7c-a317-41b1-942a-1e45e8a4f50f req-c2a2d3c9-19c7-430a-95a5-dbfab9c8f9cf 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "98515579-e916-472d-99ab-5492cfa34aea-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:40:40 compute-0 nova_compute[189485]: 2025-11-29 15:40:40.358 189489 DEBUG nova.compute.manager [req-4e332f7c-a317-41b1-942a-1e45e8a4f50f req-c2a2d3c9-19c7-430a-95a5-dbfab9c8f9cf 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] No waiting events found dispatching network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:40:40 compute-0 nova_compute[189485]: 2025-11-29 15:40:40.359 189489 WARNING nova.compute.manager [req-4e332f7c-a317-41b1-942a-1e45e8a4f50f req-c2a2d3c9-19c7-430a-95a5-dbfab9c8f9cf 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Received unexpected event network-vif-plugged-05839a7c-53a3-4f4b-b076-68284d149a00 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 15:40:41 compute-0 podman[245924]: 2025-11-29 15:40:41.675461962 +0000 UTC m=+0.102933246 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, architecture=x86_64, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, managed_by=edpm_ansible, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30)
Nov 29 15:40:41 compute-0 podman[245925]: 2025-11-29 15:40:41.705550461 +0000 UTC m=+0.130826585 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 15:40:41 compute-0 podman[245927]: 2025-11-29 15:40:41.724548482 +0000 UTC m=+0.144855943 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Nov 29 15:40:41 compute-0 podman[245926]: 2025-11-29 15:40:41.725633481 +0000 UTC m=+0.145904131 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm)
Nov 29 15:40:41 compute-0 podman[245992]: 2025-11-29 15:40:41.811442257 +0000 UTC m=+0.109500954 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.buildah.version=1.33.7)
Nov 29 15:40:43 compute-0 nova_compute[189485]: 2025-11-29 15:40:43.051 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:44 compute-0 podman[246021]: 2025-11-29 15:40:44.856529049 +0000 UTC m=+0.105387002 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 15:40:45 compute-0 nova_compute[189485]: 2025-11-29 15:40:45.236 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:47 compute-0 podman[246041]: 2025-11-29 15:40:47.637108206 +0000 UTC m=+0.074734658 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:40:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:47.986 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
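This is the chassis update the agent deferred ten seconds earlier ("Delaying updating chassis table for 10 seconds"): it bumps neutron:ovn-metadata-sb-cfg in its Chassis_Private row to acknowledge southbound config sequence 8, which is how neutron-server tells the metadata agent is alive and caught up. A sketch of the same write with ovsdbapp's OVN_Southbound API; the socket path is an assumption (this deployment connects over TLS), while the table, record UUID, and value come from the log:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/ovn/ovnsb_db.sock', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    sb.db_set(
        'Chassis_Private', '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),
        if_exists=True).execute(check_error=True)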
Nov 29 15:40:48 compute-0 nova_compute[189485]: 2025-11-29 15:40:48.055 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:50 compute-0 nova_compute[189485]: 2025-11-29 15:40:50.239 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:53 compute-0 nova_compute[189485]: 2025-11-29 15:40:53.030 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764430838.0284672, 98515579-e916-472d-99ab-5492cfa34aea => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:40:53 compute-0 nova_compute[189485]: 2025-11-29 15:40:53.032 189489 INFO nova.compute.manager [-] [instance: 98515579-e916-472d-99ab-5492cfa34aea] VM Stopped (Lifecycle Event)#033[00m
Nov 29 15:40:53 compute-0 nova_compute[189485]: 2025-11-29 15:40:53.058 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:53 compute-0 nova_compute[189485]: 2025-11-29 15:40:53.073 189489 DEBUG nova.compute.manager [None req-31ac0b3a-fd75-49cb-8c79-5b4f41dd65bf - - - - - -] [instance: 98515579-e916-472d-99ab-5492cfa34aea] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:40:55 compute-0 nova_compute[189485]: 2025-11-29 15:40:55.243 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:58 compute-0 nova_compute[189485]: 2025-11-29 15:40:58.061 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:40:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:59.187 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:40:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:59.187 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:40:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:40:59.189 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:40:59 compute-0 systemd-logind[794]: New session 29 of user zuul.
Nov 29 15:40:59 compute-0 systemd[1]: Started Session 29 of User zuul.
Nov 29 15:40:59 compute-0 podman[203677]: time="2025-11-29T15:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:40:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:40:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
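The two GET lines above are the podman system service answering libpod REST calls on its unix socket (the podman_exporter container further down mounts /run/podman/podman.sock for exactly this). The same listing endpoint can be hit directly; a sketch shelling out to curl, with the socket path and API version taken from the log:

    import json
    import subprocess

    out = subprocess.run(
        ['curl', '-s', '--unix-socket', '/run/podman/podman.sock',
         'http://d/v4.9.3/libpod/containers/json?all=true'],
        capture_output=True, text=True).stdout
    print(len(json.loads(out)), 'containers')  # cf. the 200 response above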
Nov 29 15:41:00 compute-0 nova_compute[189485]: 2025-11-29 15:41:00.246 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:41:00 compute-0 python3[246245]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 15:41:01 compute-0 openstack_network_exporter[205841]: ERROR   15:41:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:41:01 compute-0 openstack_network_exporter[205841]: ERROR   15:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:41:01 compute-0 openstack_network_exporter[205841]: ERROR   15:41:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:41:01 compute-0 openstack_network_exporter[205841]: ERROR   15:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:41:01 compute-0 openstack_network_exporter[205841]: ERROR   15:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:41:01 compute-0 podman[246284]: 2025-11-29 15:41:01.711066864 +0000 UTC m=+0.144319209 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:41:03 compute-0 nova_compute[189485]: 2025-11-29 15:41:03.064 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:41:05 compute-0 nova_compute[189485]: 2025-11-29 15:41:05.249 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:41:05 compute-0 nova_compute[189485]: 2025-11-29 15:41:05.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:41:08 compute-0 nova_compute[189485]: 2025-11-29 15:41:08.067 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:41:09 compute-0 nova_compute[189485]: 2025-11-29 15:41:09.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:41:09 compute-0 nova_compute[189485]: 2025-11-29 15:41:09.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:41:09 compute-0 nova_compute[189485]: 2025-11-29 15:41:09.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 15:41:09 compute-0 nova_compute[189485]: 2025-11-29 15:41:09.979 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:41:09 compute-0 nova_compute[189485]: 2025-11-29 15:41:09.980 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:41:09 compute-0 nova_compute[189485]: 2025-11-29 15:41:09.981 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 15:41:09 compute-0 nova_compute[189485]: 2025-11-29 15:41:09.982 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:41:10 compute-0 nova_compute[189485]: 2025-11-29 15:41:10.252 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:41:10 compute-0 podman[246307]: 2025-11-29 15:41:10.689970119 +0000 UTC m=+0.137148576 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:41:10 compute-0 ovn_controller[97827]: 2025-11-29T15:41:10Z|00057|memory_trim|INFO|Detected inactivity (last active 30010 ms ago): trimming memory
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.489 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
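[editor's note] The network_info blob logged above nests floating IPs under each fixed IP of each subnet. A trimmed copy of that structure (data abbreviated from the log line) and the loop to flatten it:

    vif = {"network": {"subnets": [{"ips": [
        {"address": "192.168.0.142",
         "floating_ips": [{"address": "192.168.122.215"}]}]}]}}

    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            floats = [f["address"] for f in ip["floating_ips"]]
            print("fixed", ip["address"], "-> floating", floats)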
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.523 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.523 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.524 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.568 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.568 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.568 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.568 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.679 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.740 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.741 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.800 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.801 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.858 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.859 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.920 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.928 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.984 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:11 compute-0 nova_compute[189485]: 2025-11-29 15:41:11.985 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.045 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.046 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.139 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.140 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.222 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
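[editor's note] Every qemu-img info call in the audit above is wrapped in oslo_concurrency.prlimit with a 1 GiB address-space cap and a 30 s CPU cap, which is how nova keeps a malformed image from wedging qemu-img during the resource audit. The same invocation through oslo.concurrency's Python API, with the path and limit values copied from the logged command:

    from oslo_concurrency import processutils

    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    # execute() re-execs itself via "python3 -m oslo_concurrency.prlimit",
    # producing exactly the command line captured in the log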
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.580 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.581 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4912MB free_disk=72.36064529418945GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.582 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.582 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:41:12 compute-0 podman[246351]: 2025-11-29 15:41:12.637621407 +0000 UTC m=+0.088148069 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 15:41:12 compute-0 podman[246352]: 2025-11-29 15:41:12.64183451 +0000 UTC m=+0.088154079 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi)
Nov 29 15:41:12 compute-0 podman[246350]: 2025-11-29 15:41:12.656956127 +0000 UTC m=+0.110388947 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, distribution-scope=public, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible)
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.671 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.671 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.671 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.671 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:41:12 compute-0 podman[246357]: 2025-11-29 15:41:12.684683732 +0000 UTC m=+0.120560570 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Nov 29 15:41:12 compute-0 podman[246353]: 2025-11-29 15:41:12.684814795 +0000 UTC m=+0.121605988 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.784 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.799 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
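[editor's note] The inventory dict above is what placement uses to size this provider; usable capacity per resource class is (total - reserved) * allocation_ratio. Checking the logged numbers:

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2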
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.824 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:41:12 compute-0 nova_compute[189485]: 2025-11-29 15:41:12.824 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:41:13 compute-0 nova_compute[189485]: 2025-11-29 15:41:13.071 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:41:13 compute-0 nova_compute[189485]: 2025-11-29 15:41:13.784 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:41:13 compute-0 nova_compute[189485]: 2025-11-29 15:41:13.784 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:41:13 compute-0 nova_compute[189485]: 2025-11-29 15:41:13.785 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:41:13 compute-0 nova_compute[189485]: 2025-11-29 15:41:13.786 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:41:13 compute-0 nova_compute[189485]: 2025-11-29 15:41:13.787 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
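[editor's note] The burst of "Running periodic task" lines comes from oslo.service's periodic task runner, which iterates every method tagged with the periodic_task decorator and logs each invocation from run_periodic_tasks. A minimal sketch of the pattern (the class name and spacing value are illustrative, not nova's):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _poll_something(self, context):
            # each call is logged by run_periodic_tasks, as in the lines above
            print('polled')

    mgr = DemoManager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)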
Nov 29 15:41:15 compute-0 nova_compute[189485]: 2025-11-29 15:41:15.255 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:41:15 compute-0 podman[246447]: 2025-11-29 15:41:15.636736435 +0000 UTC m=+0.090379100 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125)
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.627 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "89d41ab5-c0e8-4371-b48e-c118019b2a97" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.627 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "89d41ab5-c0e8-4371-b48e-c118019b2a97" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.655 189489 DEBUG nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.752 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.753 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.763 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.763 189489 INFO nova.compute.claims [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.932 189489 DEBUG nova.compute.provider_tree [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.951 189489 DEBUG nova.scheduler.client.report [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.970 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.218s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:41:16 compute-0 nova_compute[189485]: 2025-11-29 15:41:16.971 189489 DEBUG nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.022 189489 DEBUG nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.039 189489 INFO nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.072 189489 DEBUG nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.168 189489 DEBUG nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.169 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.170 189489 INFO nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Creating image(s)#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.170 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.171 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.171 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.172 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "a9699c1a698d6502fb8d031636af19823e4dc525" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:41:17 compute-0 nova_compute[189485]: 2025-11-29 15:41:17.172 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a9699c1a698d6502fb8d031636af19823e4dc525" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:41:18 compute-0 nova_compute[189485]: 2025-11-29 15:41:18.074 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:41:18 compute-0 nova_compute[189485]: 2025-11-29 15:41:18.584 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:18 compute-0 podman[246466]: 2025-11-29 15:41:18.663633438 +0000 UTC m=+0.102113514 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 15:41:18 compute-0 nova_compute[189485]: 2025-11-29 15:41:18.670 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525.part --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:18 compute-0 nova_compute[189485]: 2025-11-29 15:41:18.673 189489 DEBUG nova.virt.images [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] 07af09cd-6341-4caf-928b-206788b98d53 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Nov 29 15:41:18 compute-0 nova_compute[189485]: 2025-11-29 15:41:18.675 189489 DEBUG nova.privsep.utils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Nov 29 15:41:18 compute-0 nova_compute[189485]: 2025-11-29 15:41:18.676 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525.part /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:18 compute-0 nova_compute[189485]: 2025-11-29 15:41:18.928 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525.part /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525.converted" returned: 0 in 0.252s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
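[editor's note] The lines above show nova's image-cache fetch path: the Glance download lands in <hash>.part, qemu-img info identifies it as qcow2, and fetch_to_raw converts it to a raw <hash>.converted, all while the <hash> cache lock acquired earlier is held. A condensed sketch of that sequence (paths from the log; the final rename into place is inferred from the base filename probed afterwards):

    import json
    import os
    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            'a9699c1a698d6502fb8d031636af19823e4dc525')

    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', base + '.part',
         '--force-share', '--output=json']))
    if info['format'] == 'qcow2':
        subprocess.check_call(['qemu-img', 'convert', '-t', 'none',
                               '-O', 'raw', '-f', 'qcow2',
                               base + '.part', base + '.converted'])
        os.rename(base + '.converted', base)   # inferred final step
    else:
        os.rename(base + '.part', base)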
Nov 29 15:41:18 compute-0 nova_compute[189485]: 2025-11-29 15:41:18.939 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.027 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525.converted --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.028 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a9699c1a698d6502fb8d031636af19823e4dc525" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.856s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.053 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.146 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.147 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "a9699c1a698d6502fb8d031636af19823e4dc525" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.147 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a9699c1a698d6502fb8d031636af19823e4dc525" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.158 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.211 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.212 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525,backing_fmt=raw /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.256 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525,backing_fmt=raw /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.257 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "a9699c1a698d6502fb8d031636af19823e4dc525" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
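
[Note: the qemu-img create above builds the instance's root disk as a copy-on-write qcow2 overlay on top of the shared raw base image in _base: the overlay starts near-empty and only stores blocks the guest writes. An equivalent sketch with the arguments taken from the logged command; the helper name and placeholder paths are illustrative:]

    import subprocess

    def create_overlay(backing_file: str, overlay_path: str, size_bytes: int) -> None:
        subprocess.run(
            ["qemu-img", "create", "-f", "qcow2",
             "-o", f"backing_file={backing_file},backing_fmt=raw",
             overlay_path, str(size_bytes)],
            check=True,
        )

    # e.g. create_overlay("/var/lib/nova/instances/_base/<base-hash>",
    #                     "/var/lib/nova/instances/<uuid>/disk", 1 << 30)
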
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.257 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.360 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.361 189489 DEBUG nova.virt.disk.api [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Checking if we can resize image /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.361 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.441 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.442 189489 DEBUG nova.virt.disk.api [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Cannot resize image /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
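
[Note: the can_resize_image check above compares the flavor's requested size (1073741824 bytes) against the overlay's current virtual size and refuses to shrink, since shrinking a guest-visible disk risks data loss. A simplified sketch of that comparison; nova's real check in nova/virt/disk/api.py does more, and the helper name here is illustrative:]

    import json
    import subprocess

    def can_grow(path: str, requested_bytes: int) -> bool:
        out = subprocess.run(
            ["qemu-img", "info", path, "--force-share", "--output=json"],
            check=True, capture_output=True, text=True,
        )
        current = json.loads(out.stdout)["virtual-size"]
        return requested_bytes > current  # growing only; never shrink
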
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.443 189489 DEBUG nova.objects.instance [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'migration_context' on Instance uuid 89d41ab5-c0e8-4371-b48e-c118019b2a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.473 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.474 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.474 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.487 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.487 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.488 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.558 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.559 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.560 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.572 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.629 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.630 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.671 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.eph0 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.672 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.672 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.738 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.739 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.739 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Ensure instance console log exists: /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.739 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.740 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.740 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
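
[Note: the Acquiring/acquired/released triples throughout this log (image hashes, disk.info, vgpu_resources, the instance UUID) are emitted by oslo.concurrency's lock helpers, which serialize critical sections and log wait/hold times. A minimal usage sketch; the function name is illustrative, not nova's code:]

    from oslo_concurrency import lockutils

    @lockutils.synchronized("vgpu_resources")
    def allocate_mdevs():
        # Only one thread runs this body at a time; entry and exit produce
        # "acquired ... waited Ns" / "released ... held Ns" lines like the above.
        pass
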
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.742 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:41:04Z,direct_url=<?>,disk_format='qcow2',id=07af09cd-6341-4caf-928b-206788b98d53,min_disk=0,min_ram=0,name='fvt_testing_image',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:41:09Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '07af09cd-6341-4caf-928b-206788b98d53'}], 'ephemerals': [{'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'encryption_format': None, 'size': 1, 'guest_format': None, 'encrypted': False}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.747 189489 WARNING nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.757 189489 DEBUG nova.virt.libvirt.host [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.758 189489 DEBUG nova.virt.libvirt.host [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.763 189489 DEBUG nova.virt.libvirt.host [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.763 189489 DEBUG nova.virt.libvirt.host [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.764 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.764 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:41:11Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c04e987e-66f0-4e56-8d3e-f538cbb5e980',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-11-29T15:41:04Z,direct_url=<?>,disk_format='qcow2',id=07af09cd-6341-4caf-928b-206788b98d53,min_disk=0,min_ram=0,name='fvt_testing_image',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-11-29T15:41:09Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.765 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.765 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.765 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.765 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.765 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.766 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.766 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.766 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.766 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.767 189489 DEBUG nova.virt.hardware [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.770 189489 DEBUG nova.objects.instance [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 89d41ab5-c0e8-4371-b48e-c118019b2a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.794 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <uuid>89d41ab5-c0e8-4371-b48e-c118019b2a97</uuid>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <name>instance-00000005</name>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <memory>524288</memory>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <nova:name>fvt_testing_server</nova:name>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:41:19</nova:creationTime>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <nova:flavor name="fvt_testing_flavor">
Nov 29 15:41:19 compute-0 nova_compute[189485]:        <nova:memory>512</nova:memory>
Nov 29 15:41:19 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:41:19 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:41:19 compute-0 nova_compute[189485]:        <nova:ephemeral>1</nova:ephemeral>
Nov 29 15:41:19 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:41:19 compute-0 nova_compute[189485]:        <nova:user uuid="5cbf094e2197487fbe16a0fe6e3076ba">admin</nova:user>
Nov 29 15:41:19 compute-0 nova_compute[189485]:        <nova:project uuid="04d676205d9142d19f3d4ce7389f72a2">admin</nova:project>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="07af09cd-6341-4caf-928b-206788b98d53"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <nova:ports/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <system>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <entry name="serial">89d41ab5-c0e8-4371-b48e-c118019b2a97</entry>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <entry name="uuid">89d41ab5-c0e8-4371-b48e-c118019b2a97</entry>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </system>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <os>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  </os>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <features>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  </features>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.eph0"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <target dev="vdb" bus="virtio"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.config"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/console.log" append="off"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <video>
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </video>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:41:19 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:41:19 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:41:19 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:41:19 compute-0 nova_compute[189485]: </domain>
Nov 29 15:41:19 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
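
[Note: once rendered, the domain XML above is handed to libvirt, which defines and boots the guest; the systemd-machined "New machine" line further below is the visible result. A minimal sketch of that handoff with the libvirt Python binding; nova actually wraps this in its own Host/Guest helpers:]

    import libvirt

    def define_and_start(xml: str) -> None:
        conn = libvirt.open("qemu:///system")
        try:
            dom = conn.defineXML(xml)  # persist the domain definition
            dom.create()               # start it; machined registers the VM
        finally:
            conn.close()
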
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.846 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.847 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.848 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:41:19 compute-0 nova_compute[189485]: 2025-11-29 15:41:19.849 189489 INFO nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Using config drive
Nov 29 15:41:20 compute-0 nova_compute[189485]: 2025-11-29 15:41:20.257 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:20 compute-0 nova_compute[189485]: 2025-11-29 15:41:20.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:41:20 compute-0 nova_compute[189485]: 2025-11-29 15:41:20.519 189489 INFO nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Creating config drive at /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.config
Nov 29 15:41:20 compute-0 nova_compute[189485]: 2025-11-29 15:41:20.524 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j8agkwg execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:41:20 compute-0 nova_compute[189485]: 2025-11-29 15:41:20.651 189489 DEBUG oslo_concurrency.processutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp6j8agkwg" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
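
[Note: the mkisofs run above packs the staged metadata directory into an ISO9660 volume labelled config-2, which the guest sees as the sata cdrom declared in the XML. An equivalent sketch with flags abridged from the logged command; the staging path is whatever temp dir holds the metadata files, and the helper name is illustrative:]

    import subprocess

    def build_config_drive(staging_dir: str, iso_path: str) -> None:
        subprocess.run(
            ["/usr/bin/mkisofs", "-o", iso_path,
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-J", "-r", "-V", "config-2", staging_dir],
            check=True,
        )
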
Nov 29 15:41:20 compute-0 systemd-machined[155802]: New machine qemu-5-instance-00000005.
Nov 29 15:41:20 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.157 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430881.1560895, 89d41ab5-c0e8-4371-b48e-c118019b2a97 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.159 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] VM Resumed (Lifecycle Event)
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.160 189489 DEBUG nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.161 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.165 189489 INFO nova.virt.libvirt.driver [-] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Instance spawned successfully.
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.165 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.191 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.197 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.201 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.201 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.202 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.202 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.202 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.203 189489 DEBUG nova.virt.libvirt.driver [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.243 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.244 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764430881.1584642, 89d41ab5-c0e8-4371-b48e-c118019b2a97 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.244 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] VM Started (Lifecycle Event)
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.272 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.278 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.283 189489 INFO nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Took 4.11 seconds to spawn the instance on the hypervisor.
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.283 189489 DEBUG nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.301 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] During sync_power_state the instance has a pending task (spawning). Skip.
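
[Note: the "Synchronizing instance power state" lines compare nova's stored power_state (0 = NOSTATE here, since the row predates boot) with the state libvirt reports (1 = RUNNING); while task_state is still 'spawning' the sync is deliberately skipped. An illustrative sketch of how a libvirt domain state becomes a nova power_state integer; the exact table is simplified from nova's mapping:]

    import libvirt

    # nova.compute.power_state values: 0 NOSTATE, 1 RUNNING, 3 PAUSED,
    # 4 SHUTDOWN, 6 CRASHED, 7 SUSPENDED
    LIBVIRT_TO_POWER_STATE = {
        libvirt.VIR_DOMAIN_RUNNING: 1,
        libvirt.VIR_DOMAIN_PAUSED: 3,
        libvirt.VIR_DOMAIN_SHUTOFF: 4,
        libvirt.VIR_DOMAIN_CRASHED: 6,
        libvirt.VIR_DOMAIN_PMSUSPENDED: 7,
    }

    def vm_power_state(dom: "libvirt.virDomain") -> int:
        state, _reason = dom.state()
        return LIBVIRT_TO_POWER_STATE.get(state, 0)  # 0 = NOSTATE
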
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.339 189489 INFO nova.compute.manager [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Took 4.62 seconds to build instance.
Nov 29 15:41:21 compute-0 nova_compute[189485]: 2025-11-29 15:41:21.367 189489 DEBUG oslo_concurrency.lockutils [None req-865a4de4-660d-4646-a677-a8c90cfbdf4d 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "89d41ab5-c0e8-4371-b48e-c118019b2a97" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:41:22 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 15:41:22 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 29 15:41:23 compute-0 nova_compute[189485]: 2025-11-29 15:41:23.076 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:25 compute-0 nova_compute[189485]: 2025-11-29 15:41:25.260 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:28 compute-0 nova_compute[189485]: 2025-11-29 15:41:28.079 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:29 compute-0 podman[203677]: time="2025-11-29T15:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:41:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:41:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Nov 29 15:41:30 compute-0 nova_compute[189485]: 2025-11-29 15:41:30.263 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:31 compute-0 openstack_network_exporter[205841]: ERROR   15:41:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:41:31 compute-0 openstack_network_exporter[205841]: ERROR   15:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:41:31 compute-0 openstack_network_exporter[205841]: ERROR   15:41:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:41:31 compute-0 openstack_network_exporter[205841]: ERROR   15:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:41:31 compute-0 openstack_network_exporter[205841]: ERROR   15:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
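
[Note: the exporter errors above mean openstack_network_exporter probed for ovsdb-server and ovn-northd control sockets and found none; on a compute-only node ovn-northd is not expected to run, so those lines are typically benign. A quick check of what it is probing for, assuming the usual default socket locations (adjust paths for your deployment):]

    import glob

    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket found")
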
Nov 29 15:41:32 compute-0 podman[246578]: 2025-11-29 15:41:32.625915977 +0000 UTC m=+0.079938239 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:41:33 compute-0 nova_compute[189485]: 2025-11-29 15:41:33.082 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:35 compute-0 nova_compute[189485]: 2025-11-29 15:41:35.266 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:38 compute-0 nova_compute[189485]: 2025-11-29 15:41:38.086 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.270 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.667 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "89d41ab5-c0e8-4371-b48e-c118019b2a97" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.668 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "89d41ab5-c0e8-4371-b48e-c118019b2a97" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.668 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "89d41ab5-c0e8-4371-b48e-c118019b2a97-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.669 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "89d41ab5-c0e8-4371-b48e-c118019b2a97-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.670 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "89d41ab5-c0e8-4371-b48e-c118019b2a97-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
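[editor's note] The oslo.concurrency lines above show the two-level locking nova uses during teardown: one named lock per instance held for the whole do_terminate_instance call, and a narrower "<uuid>-events" lock around the instance-event dictionary. A minimal sketch of that named-lock pattern, assuming only the oslo.concurrency library (the function bodies are illustrative stand-ins, not nova's code):

from oslo_concurrency import lockutils

INSTANCE_UUID = "89d41ab5-c0e8-4371-b48e-c118019b2a97"

@lockutils.synchronized(INSTANCE_UUID + "-events")
def clear_events():
    # Mutates the per-instance event dict under its own finer-grained lock,
    # producing the "-events" acquire/release pair seen above.
    pass

@lockutils.synchronized(INSTANCE_UUID)
def do_terminate_instance():
    # Everything in here runs with the per-instance lock held.
    clear_events()

do_terminate_instance()

Each synchronized() wrapper emits the same 'Acquiring lock ... by ...' / '"released" ... held N s' DEBUG pairs that lockutils logs above.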
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.672 189489 INFO nova.compute.manager [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Terminating instance
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.674 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "refresh_cache-89d41ab5-c0e8-4371-b48e-c118019b2a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.674 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquired lock "refresh_cache-89d41ab5-c0e8-4371-b48e-c118019b2a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.677 189489 DEBUG nova.network.neutron [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 15:41:40 compute-0 nova_compute[189485]: 2025-11-29 15:41:40.855 189489 DEBUG nova.network.neutron [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 15:41:41 compute-0 podman[246600]: 2025-11-29 15:41:41.672531893 +0000 UTC m=+0.115820313 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.020 189489 DEBUG nova.network.neutron [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.044 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Releasing lock "refresh_cache-89d41ab5-c0e8-4371-b48e-c118019b2a97" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.045 189489 DEBUG nova.compute.manager [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Nov 29 15:41:42 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Nov 29 15:41:42 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 21.765s CPU time.
Nov 29 15:41:42 compute-0 systemd-machined[155802]: Machine qemu-5-instance-00000005 terminated.
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.319 189489 INFO nova.virt.libvirt.driver [-] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Instance destroyed successfully.
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.319 189489 DEBUG nova.objects.instance [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'resources' on Instance uuid 89d41ab5-c0e8-4371-b48e-c118019b2a97 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.343 189489 INFO nova.virt.libvirt.driver [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Deleting instance files /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97_del
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.344 189489 INFO nova.virt.libvirt.driver [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Deletion of /var/lib/nova/instances/89d41ab5-c0e8-4371-b48e-c118019b2a97_del complete
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.399 189489 INFO nova.compute.manager [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Took 0.35 seconds to destroy the instance on the hypervisor.
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.400 189489 DEBUG oslo.service.loopingcall [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.400 189489 DEBUG nova.compute.manager [-] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Nov 29 15:41:42 compute-0 nova_compute[189485]: 2025-11-29 15:41:42.400 189489 DEBUG nova.network.neutron [-] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.015 189489 DEBUG nova.network.neutron [-] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.029 189489 DEBUG nova.network.neutron [-] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.044 189489 INFO nova.compute.manager [-] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Took 0.64 seconds to deallocate network for instance.
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.091 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.117 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.118 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.259 189489 DEBUG nova.compute.provider_tree [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.277 189489 DEBUG nova.scheduler.client.report [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
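[editor's note] The inventory payload above is what the resource tracker reports to Placement; for each resource class the schedulable pool works out to (total - reserved) * allocation_ratio. A quick arithmetic check of the logged numbers (plain Python, no OpenStack imports):

# Capacity implied by the logged inventory: (total - reserved) * allocation_ratio.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {usable:g} schedulable")

# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2 -- the pool that the deleted
# allocation reported a few lines below is returned to.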
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.305 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.187s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.365 189489 INFO nova.scheduler.client.report [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Deleted allocations for instance 89d41ab5-c0e8-4371-b48e-c118019b2a97
Nov 29 15:41:43 compute-0 nova_compute[189485]: 2025-11-29 15:41:43.447 189489 DEBUG oslo_concurrency.lockutils [None req-7bd3c986-2575-4323-88a4-17b4438db4af 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "89d41ab5-c0e8-4371-b48e-c118019b2a97" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.779s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:41:43 compute-0 podman[246635]: 2025-11-29 15:41:43.637230868 +0000 UTC m=+0.076069235 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 29 15:41:43 compute-0 podman[246634]: 2025-11-29 15:41:43.645343146 +0000 UTC m=+0.088625602 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:41:43 compute-0 podman[246637]: 2025-11-29 15:41:43.668395906 +0000 UTC m=+0.092991270 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 29 15:41:43 compute-0 podman[246633]: 2025-11-29 15:41:43.688535307 +0000 UTC m=+0.129317075 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, release-0.7.12=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, architecture=x86_64, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=)
Nov 29 15:41:43 compute-0 podman[246636]: 2025-11-29 15:41:43.706867759 +0000 UTC m=+0.138513402 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
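[editor's note] Each health_status=healthy event above is podman running the container's configured healthcheck ('test': '/openstack/healthcheck ...') on its schedule and publishing the verdict. The same state can be read back on the host; a small sketch shelling out to the standard podman CLI (container name taken from the log, error handling omitted):

import subprocess

def health_status(name: str) -> str:
    # podman inspect exposes the last recorded healthcheck verdict.
    out = subprocess.run(
        ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
        capture_output=True, text=True, check=True,
    )
    return out.stdout.strip()

# Force one extra run of the configured check (exit code 0 means healthy) ...
subprocess.run(["podman", "healthcheck", "run", "ovn_controller"], check=False)
# ... then read the recorded status, e.g. "healthy".
print(health_status("ovn_controller"))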
Nov 29 15:41:45 compute-0 nova_compute[189485]: 2025-11-29 15:41:45.272 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:46 compute-0 podman[246729]: 2025-11-29 15:41:46.659969093 +0000 UTC m=+0.102653228 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 15:41:48 compute-0 nova_compute[189485]: 2025-11-29 15:41:48.094 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:49 compute-0 podman[246750]: 2025-11-29 15:41:49.640575437 +0000 UTC m=+0.079634821 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
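[editor's note] The node_exporter invocation above disables most collectors and restricts the systemd collector with --collector.systemd.unit-include, so only EDPM, Open vSwitch, virt and rsyslog units are scraped. node_exporter matches that pattern with Go's anchored RE2 engine; a rough Python approximation of what it keeps (re.fullmatch stands in for Go's anchoring; unit names are examples):

import re

# Pattern copied from the --collector.systemd.unit-include flag above.
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
             "virtqemud.service", "sshd.service"]:
    print(unit, "kept" if unit_include.fullmatch(unit) else "dropped")

# sshd.service is dropped; the edpm_*, ovs* and virt* units are kept.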
Nov 29 15:41:50 compute-0 nova_compute[189485]: 2025-11-29 15:41:50.277 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:53 compute-0 nova_compute[189485]: 2025-11-29 15:41:53.099 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:55 compute-0 nova_compute[189485]: 2025-11-29 15:41:55.281 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:57 compute-0 nova_compute[189485]: 2025-11-29 15:41:57.316 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764430902.3143227, 89d41ab5-c0e8-4371-b48e-c118019b2a97 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:41:57 compute-0 nova_compute[189485]: 2025-11-29 15:41:57.317 189489 INFO nova.compute.manager [-] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] VM Stopped (Lifecycle Event)
Nov 29 15:41:57 compute-0 nova_compute[189485]: 2025-11-29 15:41:57.344 189489 DEBUG nova.compute.manager [None req-7123175c-8019-40ca-a053-2a854f80df09 - - - - - -] [instance: 89d41ab5-c0e8-4371-b48e-c118019b2a97] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:41:58 compute-0 nova_compute[189485]: 2025-11-29 15:41:58.101 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:41:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:41:59.188 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:41:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:41:59.189 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:41:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:41:59.190 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:41:59 compute-0 podman[203677]: time="2025-11-29T15:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:41:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:41:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
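[editor's note] The two GET requests above are the prometheus-podman-exporter polling podman's libpod REST API through the socket it was given as CONTAINER_HOST (unix:///run/podman/podman.sock). The same endpoint can be queried with nothing but the standard library; a sketch, assuming the socket path and API version shown in the access log:

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a Unix socket instead of TCP."""
    def __init__(self, path):
        super().__init__("localhost")
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])  # the same listing the exporter pulls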
Nov 29 15:42:00 compute-0 nova_compute[189485]: 2025-11-29 15:42:00.283 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:00 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Nov 29 15:42:00 compute-0 systemd[1]: session-29.scope: Consumed 1.221s CPU time.
Nov 29 15:42:00 compute-0 systemd-logind[794]: Session 29 logged out. Waiting for processes to exit.
Nov 29 15:42:00 compute-0 systemd-logind[794]: Removed session 29.
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.054 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.055 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.056 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
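[editor's note] The registration lines above hand every pollster to one shared ThreadPoolExecutor; with [1] worker thread, as logged a few lines earlier, the tasks can only run one after another, which is why the manager warned that the cycle may take longer than expected. A toy illustration of that serialization (names and timings invented for the sketch):

from concurrent.futures import ThreadPoolExecutor
import time

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's sampling work
    return name

# max_workers=1 mirrors the "[1] threads" from the log above.
with ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(poll, f"pollster-{i}") for i in range(25)]
    results = [f.result() for f in futures]

# 25 tasks x 0.1 s on a single worker take roughly 2.5 s: more pollsters
# than worker threads means the polling cycle stretches accordingly.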
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.065 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416', 'name': 'vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.070 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.070 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.070 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.071 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.071 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.071 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:42:01.071137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.077 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.082 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.083 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.083 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.083 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.084 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.084 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
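[editor's note] The *.delta samples above are derived, not measured: each one is the difference between the current cumulative counter and the value from the previous polling cycle, per resource. Reconstructing the logged values (the previous-cycle numbers are inferred from the deltas, not shown in the log):

# network.outgoing.bytes.delta = current cumulative - previous cumulative.
previous = {"dd0fdf5e": 2286, "b5d60fb8": 2342}  # assumed prior-cycle values
current  = {"dd0fdf5e": 2356, "b5d60fb8": 2342}  # cumulative samples above

delta = {rid: current[rid] - previous[rid] for rid in current}
print(delta)  # {'dd0fdf5e': 70, 'b5d60fb8': 0} -- matches the .delta samples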
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.084 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.085 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.085 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.085 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.085 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:42:01.083765) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:42:01.085275) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.126 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.170 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.172 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.172 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.173 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.173 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.175 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.176 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.177 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.177 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:42:01.172996) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.177 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
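network.incoming.bytes is a cumulative counter, while network.incoming.bytes.delta reports only the change since the previous poll (84 bytes for each instance here). A sketch of that derivation, assuming the previous cumulative reading for dd0fdf5e was 1570 (the log only shows the current value, 1654):

    # cache maps (instance, meter) -> last cumulative reading seen; the 1570 is assumed
    cache = {("dd0fdf5e", "network.incoming.bytes"): 1570}

    def delta(instance, meter, cumulative):
        prev = cache.get((instance, meter))
        cache[(instance, meter)] = cumulative
        # the first poll has no baseline, so report 0 instead of the raw counter
        return cumulative - prev if prev is not None else 0

    print(delta("dd0fdf5e", "network.incoming.bytes", 1654))  # -> 84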
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.181 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.181 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:42:01.176915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.183 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.183 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:42:01.180779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.185 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:42:01.183543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.303 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.304 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.304 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.380 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.380 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.380 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
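Each instance emits three disk.device.read.bytes volumes, one per attached block device; the agent reads these counters through libvirt. A rough equivalent using the libvirt Python binding (the connection URI and the device names vda/vdb/vdc are assumptions; the real agent enumerates devices from the domain XML):

    import libvirt  # python3-libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("dd0fdf5e-41d6-4c60-a546-112da1f37416")
    for dev in ("vda", "vdb", "vdc"):  # assumed device names
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(dev, "read bytes:", rd_bytes)
    conn.close()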
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.381 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.381 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.381 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.382 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.382 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:42:01.381852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:42:01.383182) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.405 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.406 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.406 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 openstack_network_exporter[205841]: ERROR   15:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:42:01 compute-0 openstack_network_exporter[205841]: ERROR   15:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:42:01 compute-0 openstack_network_exporter[205841]: ERROR   15:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:42:01 compute-0 openstack_network_exporter[205841]: ERROR   15:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:42:01 compute-0 openstack_network_exporter[205841]: ERROR   15:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
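The openstack_network_exporter errors above mean the exporter could not find the Unix control sockets that ovsdb-server, ovs-vswitchd, and ovn-northd expose for appctl-style calls, so those probes are skipped; the ovn-northd failures are expected on a compute node that does not run northd. A quick way to check which sockets actually exist, assuming the default rundirs (adjust the paths if your deployment relocates them):

    import glob

    # the daemons create <name>.<pid>.ctl sockets in their rundir
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        print(pattern, "->", glob.glob(pattern) or "none found")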
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.434 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.434 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.435 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
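disk.device.capacity is reported in bytes: 1073741824 is exactly 1 GiB, so the first two devices of each instance are 1 GiB, while the third (583680 and 485376 bytes) is under a mebibyte, consistent with a small config-drive-style device (an inference, not confirmed by the log):

    GIB = 1024 ** 3
    print(1073741824 == GIB)    # True: the first two devices are exactly 1 GiB
    print(583680 / 1024)        # 570.0 KiB for dd0fdf5e's third device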
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.436 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.436 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.437 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/cpu volume: 36250000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.437 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 45120000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
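The cpu volume is cumulative guest CPU time in nanoseconds (36250000000 ns is 36.25 s for dd0fdf5e). Utilization over an interval can be derived from two such samples; a sketch of that arithmetic (the second sample, the 10 s interval, and the single vCPU are made-up values, only the first sample comes from the log):

    def cpu_util_percent(prev_ns, now_ns, wall_s, vcpus):
        """Percent CPU use between two cumulative cpu-time samples (nanoseconds)."""
        return (now_ns - prev_ns) / (wall_s * 1e9 * vcpus) * 100.0

    print(cpu_util_percent(36_250_000_000, 36_850_000_000, wall_s=10, vcpus=1))  # 6.0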
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.438 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.438 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.438 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.438 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.439 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.439 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 489570269 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.439 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 78552201 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.439 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 63090868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:42:01.436789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.440 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:42:01.438992) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.440 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.441 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.441 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.442 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.442 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.442 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.442 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.442 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.443 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.443 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.443 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.443 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.444 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.444 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:42:01.443341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.444 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.445 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.445 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.445 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.446 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.446 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.447 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.447 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.447 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.448 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.448 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:42:01.447365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.449 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.449 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.449 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.450 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.450 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.450 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.450 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.451 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.451 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.451 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.451 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.452 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:42:01.451057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.452 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.452 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.452 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.453 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.453 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.453 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.453 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 1406170011 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.454 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 9552907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.454 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.454 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:42:01.453794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.454 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.455 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.455 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.456 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.456 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.456 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:42:01.456481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.457 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
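power.state volume 1 for both instances corresponds to libvirt's "running" domain state, assuming the pollster reports the raw virDomainState integer (which is how the compute agent sources it). The state values, for reference:

    # libvirt virDomainState values; ceilometer reports the raw integer
    POWER_STATES = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(POWER_STATES[1])  # "running" -- matches both instances above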
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.457 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.457 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.458 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.458 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.458 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.458 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:42:01.458003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.458 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.459 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.459 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.459 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.460 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.460 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.460 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.461 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:42:01.460526) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.461 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.462 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.463 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.463 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.463 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:42:01.462240) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.463 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.463 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.463 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:42:01.463497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.464 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.464 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.464 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.464 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
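Each instance reports one disk.device.allocation sample per attached device (three apiece above). A hedged sketch of where such byte counts can come from, using libvirt-python's blockInfo call; the connection URI and the guest device targets are assumptions for illustration:

    # Reading per-device allocation with libvirt-python; blockInfo()
    # returns [capacity, allocation, physical] in bytes.
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("dd0fdf5e-41d6-4c60-a546-112da1f37416")
    for dev in ("vda", "vdb"):  # guest disk targets (assumed)
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "allocation:", allocation)
    conn.close()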
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.465 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.465 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.466 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.466 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:42:01.465963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.466 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.466 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
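network.incoming.packets is a per-interface receive counter. A hedged sketch using libvirt-python's interfaceStats, whose 8-tuple begins with the receive-side counters; the tap device name is the one recorded in the network info cache later in this capture:

    # Interface counters behind network.incoming.packets (sketch).
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByUUIDString("dd0fdf5e-41d6-4c60-a546-112da1f37416")
    rx_bytes, rx_packets, rx_errs, rx_drop = dom.interfaceStats("tap990859f2-5f")[:4]
    print("incoming packets:", rx_packets)  # 16 in the sample above
    conn.close()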
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.467 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.467 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.467 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.467 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.467 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:42:01.467440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.468 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.468 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.468 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.468 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.468 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.469 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.469 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:42:01.468634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.469 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.469 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.470 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.470 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.470 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.470 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.470 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.470 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.471 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:42:01.470305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:42:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:42:01.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
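The burst of "Finished processing pollster" lines closes the whole polling task: every meter in the task has been processed, and the completions drain back-to-back. A sketch of that fan-out-and-collect shape, purely illustrative (the real dispatcher is ceilometer's execute_polling_task_processing):

    # Fan pollsters out, then log completions as they drain (sketch).
    from concurrent.futures import ThreadPoolExecutor

    def process(name):
        # discovery + polling as sketched earlier
        return name

    names = ["network.outgoing.bytes", "memory.usage", "cpu", "disk.root.size"]
    with ThreadPoolExecutor(max_workers=4) as pool:
        for done in pool.map(process, names):
            print(f"Finished processing pollster [{done}].")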
Nov 29 15:42:03 compute-0 nova_compute[189485]: 2025-11-29 15:42:03.103 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:42:03 compute-0 podman[246773]: 2025-11-29 15:42:03.691504949 +0000 UTC m=+0.120558980 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
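The health_status events come from podman's own periodic healthchecks; the same state can be read back from the container record. A hedged sketch (the .State.Health field name varies across podman versions, and the container name is taken from the event above):

    # Query the health state podman just logged (sketch).
    import json, subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "podman_exporter"],
        capture_output=True, text=True, check=True).stdout
    health = json.loads(out)
    print(health["Status"], health["FailingStreak"])  # healthy, 0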
Nov 29 15:42:05 compute-0 nova_compute[189485]: 2025-11-29 15:42:05.287 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:42:05 compute-0 nova_compute[189485]: 2025-11-29 15:42:05.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:42:05 compute-0 nova_compute[189485]: 2025-11-29 15:42:05.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 15:42:05 compute-0 nova_compute[189485]: 2025-11-29 15:42:05.512 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
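_run_pending_deletes is one of nova's oslo.service periodic tasks; the run_periodic_tasks lines above come from that framework. A minimal sketch of how such a task is declared (the 60-second spacing is an assumption, not nova's actual interval):

    # Declaring an oslo.service periodic task (sketch).
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)  # interval assumed
        def _run_pending_deletes(self, context):
            print("Cleaning up deleted instances")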
Nov 29 15:42:07 compute-0 nova_compute[189485]: 2025-11-29 15:42:07.513 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:42:08 compute-0 nova_compute[189485]: 2025-11-29 15:42:08.105 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:42:10 compute-0 nova_compute[189485]: 2025-11-29 15:42:10.290 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:42:10 compute-0 nova_compute[189485]: 2025-11-29 15:42:10.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:42:10 compute-0 nova_compute[189485]: 2025-11-29 15:42:10.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:42:11 compute-0 nova_compute[189485]: 2025-11-29 15:42:11.066 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:42:11 compute-0 nova_compute[189485]: 2025-11-29 15:42:11.067 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:42:11 compute-0 nova_compute[189485]: 2025-11-29 15:42:11.067 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:42:12 compute-0 podman[246795]: 2025-11-29 15:42:12.622975418 +0000 UTC m=+0.077373800 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm)
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.637 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updating instance_info_cache with network_info: [{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.666 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.667 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
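The Acquiring/Acquired/Releasing triple around the cache refresh is oslo.concurrency's named-lock pattern, keyed by instance UUID so concurrent refreshes of the same cache serialize. A minimal sketch using the same module the log cites:

    # The refresh_cache-<uuid> lock pattern (sketch).
    from oslo_concurrency import lockutils

    instance_uuid = "dd0fdf5e-41d6-4c60-a546-112da1f37416"
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        pass  # refresh the network info cache from neutron here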
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.667 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.667 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.667 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.703 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.704 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.704 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.704 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.807 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.906 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.908 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.991 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:42:12 compute-0 nova_compute[189485]: 2025-11-29 15:42:12.992 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.066 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.067 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.107 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.124 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.132 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.190 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.203 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.269 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.270 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.334 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.336 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.409 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
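Each disk image is probed with qemu-img info under oslo.concurrency's prlimit wrapper, so a hung or hostile image cannot exhaust the host (the --as and --cpu caps visible in the command lines). A hedged sketch of issuing the same capped probe; the path is taken from the log:

    # Capped qemu-img probe via oslo.concurrency (sketch).
    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk",
        "--force-share", "--output=json",
        prlimit=processutils.ProcessLimits(address_space=1073741824,
                                           cpu_time=30))
    print(out)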
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.851 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.854 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4857MB free_disk=72.33328628540039GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.855 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:42:13 compute-0 nova_compute[189485]: 2025-11-29 15:42:13.856 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.134 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.134 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.135 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.135 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
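The final resource view is consistent with the two placement allocations logged just above, assuming the tracker counts the 512MB reserved memory as used:

    # Sanity arithmetic for the final resource view (values from the log).
    instances = 2
    print(instances * 512 + 512)  # 1536 MB used_ram (2 x MEMORY_MB + reserved)
    print(8 - instances)          # 6 free vcpus (total_vcpus - used_vcpus)
    print(instances * 2)          # 4 GB used_disk (2 x DISK_GB)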
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.192 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.269 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.270 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.287 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.337 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.425 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.451 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
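The inventory dict above defines what the scheduler may place on this node; placement treats (total - reserved) * allocation_ratio as the schedulable capacity per resource class:

    # Capacity implied by the logged inventory (placement's capacity rule).
    inv = {'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
           'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
           'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9}}
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2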
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.484 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:42:14 compute-0 nova_compute[189485]: 2025-11-29 15:42:14.484 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.629s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:42:14 compute-0 podman[246839]: 2025-11-29 15:42:14.624954887 +0000 UTC m=+0.075858739 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 15:42:14 compute-0 podman[246842]: 2025-11-29 15:42:14.65782599 +0000 UTC m=+0.095056095 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Nov 29 15:42:14 compute-0 podman[246838]: 2025-11-29 15:42:14.667292384 +0000 UTC m=+0.117730814 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, vcs-type=git, architecture=x86_64, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, name=ubi9)
Nov 29 15:42:14 compute-0 podman[246840]: 2025-11-29 15:42:14.679956714 +0000 UTC m=+0.118149545 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 15:42:14 compute-0 podman[246841]: 2025-11-29 15:42:14.699225192 +0000 UTC m=+0.142356126 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
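Annotation: each podman health_status event above embeds the full config_data that edpm_ansible used to launch the container. Note that this field is a Python dict literal (single quotes, bare True), not JSON. A minimal sketch of pulling values back out of it, using ast.literal_eval rather than json.loads; the sample string is abridged from the ovn_controller event and everything beyond the logged field names is illustrative:

```python
# config_data=... in the health_status events is a Python literal, so
# ast.literal_eval is the right parser (json.loads would reject the
# single quotes and bare True). Abridged from the ovn_controller event.
import ast

config_data = ast.literal_eval(
    "{'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified',"
    " 'net': 'host', 'privileged': True, 'restart': 'always',"
    " 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller',"
    "                 'test': '/openstack/healthcheck'}}"
)

print(config_data['healthcheck']['test'])   # -> /openstack/healthcheck
print(config_data['privileged'])            # -> True (a real bool, not the string 'true')
```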
Nov 29 15:42:15 compute-0 nova_compute[189485]: 2025-11-29 15:42:15.293 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:16 compute-0 nova_compute[189485]: 2025-11-29 15:42:16.301 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:42:16 compute-0 nova_compute[189485]: 2025-11-29 15:42:16.302 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:42:16 compute-0 nova_compute[189485]: 2025-11-29 15:42:16.303 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:42:17 compute-0 podman[246937]: 2025-11-29 15:42:17.659689373 +0000 UTC m=+0.102294251 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, container_name=multipathd, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 15:42:18 compute-0 nova_compute[189485]: 2025-11-29 15:42:18.110 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:18 compute-0 systemd-logind[794]: New session 30 of user zuul.
Nov 29 15:42:18 compute-0 systemd[1]: Started Session 30 of User zuul.
Nov 29 15:42:19 compute-0 python3[247136]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
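Annotation: the Ansible task above is a shell pipeline that lists every container as a "name status" pair and greps for one service. A minimal Python equivalent of that check, assuming only that podman is on PATH; the service name is the one from the log line:

```python
# Same check as the logged task: `podman ps -a --format "{{.Names}} {{.Status}}"`
# filtered to one container name.
import subprocess

def container_status(name: str) -> list[str]:
    out = subprocess.run(
        ["podman", "ps", "-a", "--format", "{{.Names}} {{.Status}}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return [line for line in out.splitlines() if name in line]

print(container_status("node_exporter"))
# e.g. ['node_exporter Up 2 hours (healthy)']
```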
Nov 29 15:42:19 compute-0 nova_compute[189485]: 2025-11-29 15:42:19.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:42:19 compute-0 nova_compute[189485]: 2025-11-29 15:42:19.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 15:42:20 compute-0 nova_compute[189485]: 2025-11-29 15:42:20.296 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:20 compute-0 nova_compute[189485]: 2025-11-29 15:42:20.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:42:20 compute-0 podman[247174]: 2025-11-29 15:42:20.656717905 +0000 UTC m=+0.100546823 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
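Annotation: the node_exporter config above publishes host port 9100 with TLS configured through --web.config.file. A sketch of scraping that endpoint; the hostname and CA file path are assumptions inferred from the mounted /etc/node_exporter/tls volume, not values taken from the log:

```python
# Illustrative scrape of the node_exporter endpoint from the config_data
# above. Hostname and CA path are assumptions; substitute the deployment's
# actual values.
import ssl
import urllib.request

ctx = ssl.create_default_context(
    cafile="/var/lib/openstack/certs/telemetry/default/ca.crt"  # assumed path
)
with urllib.request.urlopen("https://compute-0:9100/metrics", context=ctx) as resp:
    for line in resp.read().decode().splitlines()[:5]:
        print(line)  # first few Prometheus exposition-format lines
```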
Nov 29 15:42:23 compute-0 nova_compute[189485]: 2025-11-29 15:42:23.111 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:25 compute-0 nova_compute[189485]: 2025-11-29 15:42:25.298 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:27 compute-0 python3[247372]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 15:42:28 compute-0 nova_compute[189485]: 2025-11-29 15:42:28.114 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:28 compute-0 nova_compute[189485]: 2025-11-29 15:42:28.502 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:42:28 compute-0 nova_compute[189485]: 2025-11-29 15:42:28.503 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 15:42:29 compute-0 podman[203677]: time="2025-11-29T15:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:42:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:42:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 29 15:42:30 compute-0 nova_compute[189485]: 2025-11-29 15:42:30.300 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:31 compute-0 openstack_network_exporter[205841]: ERROR   15:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:42:31 compute-0 openstack_network_exporter[205841]: ERROR   15:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:42:31 compute-0 openstack_network_exporter[205841]: ERROR   15:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:42:31 compute-0 openstack_network_exporter[205841]: ERROR   15:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:42:31 compute-0 openstack_network_exporter[205841]: ERROR   15:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
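Annotation: this error block repeats on every poll interval (the identical block appears again at 15:43:01). The exporter looks for daemon control sockets under the runtime directories its container mounts (/run/openvswitch, /run/ovn, per the config_data above); ovn-northd never runs on a compute node, so that part of the noise is expected. A rough reproduction of the lookup; the <daemon>.<pid>.ctl glob follows the usual ovs-appctl naming convention and is an assumption, not the exporter's exact code:

```python
# Look for appctl control sockets the way the exporter's helper appears
# to: scan the mounted runtime dirs for <daemon>.<pid>.ctl files.
from pathlib import Path

def find_ctl(daemon: str, rundirs=("/run/openvswitch", "/run/ovn")):
    for d in rundirs:
        hits = sorted(Path(d).glob(f"{daemon}.*.ctl"))  # empty if dir absent
        if hits:
            return hits[-1]
    return None

for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
    print(daemon, "->", find_ctl(daemon) or "no control socket found")
```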
Nov 29 15:42:33 compute-0 nova_compute[189485]: 2025-11-29 15:42:33.116 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:34 compute-0 podman[247410]: 2025-11-29 15:42:34.673153182 +0000 UTC m=+0.100556643 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:42:35 compute-0 nova_compute[189485]: 2025-11-29 15:42:35.303 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:36 compute-0 python3[247607]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 15:42:38 compute-0 nova_compute[189485]: 2025-11-29 15:42:38.118 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:40 compute-0 nova_compute[189485]: 2025-11-29 15:42:40.306 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:43 compute-0 nova_compute[189485]: 2025-11-29 15:42:43.121 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:43 compute-0 podman[247647]: 2025-11-29 15:42:43.682157917 +0000 UTC m=+0.113827357 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
Nov 29 15:42:44 compute-0 podman[247666]: 2025-11-29 15:42:44.830804356 +0000 UTC m=+0.103269775 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., vcs-type=git, version=9.6)
Nov 29 15:42:44 compute-0 podman[247665]: 2025-11-29 15:42:44.832710157 +0000 UTC m=+0.113268932 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Nov 29 15:42:44 compute-0 podman[247664]: 2025-11-29 15:42:44.841194644 +0000 UTC m=+0.117448544 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Nov 29 15:42:44 compute-0 podman[247681]: 2025-11-29 15:42:44.876835434 +0000 UTC m=+0.127068710 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Nov 29 15:42:44 compute-0 podman[247688]: 2025-11-29 15:42:44.886213474 +0000 UTC m=+0.125947150 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:42:45 compute-0 nova_compute[189485]: 2025-11-29 15:42:45.308 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:48 compute-0 nova_compute[189485]: 2025-11-29 15:42:48.125 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:48 compute-0 podman[247759]: 2025-11-29 15:42:48.693586045 +0000 UTC m=+0.128526968 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 15:42:50 compute-0 nova_compute[189485]: 2025-11-29 15:42:50.311 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:51 compute-0 podman[247815]: 2025-11-29 15:42:51.666995763 +0000 UTC m=+0.120107054 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:42:52 compute-0 python3[247977]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Nov 29 15:42:53 compute-0 nova_compute[189485]: 2025-11-29 15:42:53.128 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:55 compute-0 nova_compute[189485]: 2025-11-29 15:42:55.314 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:58 compute-0 nova_compute[189485]: 2025-11-29 15:42:58.133 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:42:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:42:59.189 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:42:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:42:59.190 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:42:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:42:59.191 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
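Annotation: the three lines above (Acquiring / acquired, waited / released, held) are the standard oslo.concurrency lock trace. A minimal sketch of the pattern that produces it, assuming oslo.concurrency is installed; the lock name matches the log, the body is illustrative:

```python
# oslo.concurrency emits the Acquiring/acquired/released DEBUG triple seen
# above whenever a synchronized section runs.
from oslo_concurrency import lockutils

@lockutils.synchronized("_check_child_processes")
def check_child_processes():
    # critical section: at most one thread of this process at a time
    pass

check_child_processes()
```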
Nov 29 15:42:59 compute-0 podman[203677]: time="2025-11-29T15:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:42:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:42:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4787 "" "Go-http-client/1.1"
Nov 29 15:43:00 compute-0 nova_compute[189485]: 2025-11-29 15:43:00.317 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:43:01 compute-0 openstack_network_exporter[205841]: ERROR   15:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:43:01 compute-0 openstack_network_exporter[205841]: ERROR   15:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:43:01 compute-0 openstack_network_exporter[205841]: ERROR   15:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:43:01 compute-0 openstack_network_exporter[205841]: ERROR   15:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:43:01 compute-0 openstack_network_exporter[205841]: ERROR   15:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:43:03 compute-0 nova_compute[189485]: 2025-11-29 15:43:03.137 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:43:05 compute-0 nova_compute[189485]: 2025-11-29 15:43:05.320 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:43:05 compute-0 podman[248019]: 2025-11-29 15:43:05.710147815 +0000 UTC m=+0.139054481 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:43:08 compute-0 nova_compute[189485]: 2025-11-29 15:43:08.140 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:43:08 compute-0 nova_compute[189485]: 2025-11-29 15:43:08.497 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:43:10 compute-0 nova_compute[189485]: 2025-11-29 15:43:10.323 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:43:11 compute-0 nova_compute[189485]: 2025-11-29 15:43:11.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:43:11 compute-0 nova_compute[189485]: 2025-11-29 15:43:11.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:43:11 compute-0 nova_compute[189485]: 2025-11-29 15:43:11.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 15:43:12 compute-0 nova_compute[189485]: 2025-11-29 15:43:12.090 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:43:12 compute-0 nova_compute[189485]: 2025-11-29 15:43:12.091 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:43:12 compute-0 nova_compute[189485]: 2025-11-29 15:43:12.091 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 15:43:12 compute-0 nova_compute[189485]: 2025-11-29 15:43:12.091 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:43:13 compute-0 nova_compute[189485]: 2025-11-29 15:43:13.143 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:43:13 compute-0 nova_compute[189485]: 2025-11-29 15:43:13.652 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
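Annotation: unlike config_data, the network_info blob in the cache-refresh line above is valid JSON (double quotes, lowercase true/null), so it can be lifted from the log and inspected directly. A sketch that walks it down to the fixed and floating addresses; the sample is abridged to fields shown in the log line:

```python
# Walk the network_info structure logged by _heal_instance_info_cache:
# one VIF -> network -> subnets -> ips -> floating_ips.
import json

network_info = json.loads('''[{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc",
  "network": {"subnets": [{"cidr": "192.168.0.0/24",
    "ips": [{"address": "192.168.0.142", "type": "fixed",
             "floating_ips": [{"address": "192.168.122.215", "type": "floating"}]}]}]},
  "devname": "tap71c1eec4-61"}]''')

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            fips = [f["address"] for f in ip.get("floating_ips", [])]
            print(vif["devname"], ip["address"], "->", fips)
# tap71c1eec4-61 192.168.0.142 -> ['192.168.122.215']
```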
Nov 29 15:43:13 compute-0 nova_compute[189485]: 2025-11-29 15:43:13.690 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:43:13 compute-0 nova_compute[189485]: 2025-11-29 15:43:13.692 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 15:43:13 compute-0 nova_compute[189485]: 2025-11-29 15:43:13.693 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.535 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.536 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.536 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.537 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:43:14 compute-0 podman[248044]: 2025-11-29 15:43:14.644498972 +0000 UTC m=+0.091172653 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.662 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.727 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.729 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.789 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.791 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.859 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.861 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.953 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:43:14 compute-0 nova_compute[189485]: 2025-11-29 15:43:14.966 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.051 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.053 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.147 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.150 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.214 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.218 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.281 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
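The CMD lines above are nova's periodic disk probe: each `qemu-img info` call is wrapped in `oslo_concurrency.prlimit` so a misbehaving qemu-img is capped at 1 GiB of address space (--as=1073741824) and 30 seconds of CPU time (--cpu=30). A minimal sketch of the same invocation through oslo.concurrency's public API, assuming oslo.concurrency is installed; /path/to/disk is a hypothetical stand-in for an instance disk:

    from oslo_concurrency import processutils

    # Cap the child's address space and CPU time, mirroring the log's
    # "--as=1073741824 --cpu=30" arguments.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', '/path/to/disk',   # hypothetical disk path
        '--force-share', '--output=json',
        prlimit=limits,
    )
    print(out)  # JSON description of the image (format, virtual size, ...)

The prlimit wrapper re-executes the command under a helper process that applies the resource limits before exec, which is why the logged command line starts with /usr/bin/python3 -m oslo_concurrency.prlimit.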
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.326 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:15 compute-0 podman[248089]: 2025-11-29 15:43:15.654745568 +0000 UTC m=+0.089106927 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:43:15 compute-0 podman[248090]: 2025-11-29 15:43:15.665460815 +0000 UTC m=+0.095213611 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:43:15 compute-0 podman[248088]: 2025-11-29 15:43:15.678966445 +0000 UTC m=+0.115649496 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm, release=1214.1726694543, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, io.openshift.expose-services=, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 29 15:43:15 compute-0 podman[248097]: 2025-11-29 15:43:15.682447718 +0000 UTC m=+0.098223301 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, name=ubi9-minimal, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, io.openshift.tags=minimal rhel9)
Nov 29 15:43:15 compute-0 podman[248091]: 2025-11-29 15:43:15.707798764 +0000 UTC m=+0.131768516 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.737 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.739 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4837MB free_disk=72.333251953125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.739 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.739 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.845 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.846 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.846 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.846 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.921 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.936 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
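The inventory dictionary above reconciles with the final resource view two lines earlier. Placement sizes each resource class as (total - reserved) * allocation_ratio, and used_ram=1536MB is the 512 MB host reservation plus the two instances' 512 MB allocations. A short check in Python, with the numbers copied straight from the log lines:

    # Numbers copied from the "Inventory has not changed" line above.
    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        # Placement's effective capacity formula for a resource class.
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2

    # used_ram in the final resource view: 512 MB reserved host memory
    # plus two instances at 512 MB each.
    print(512 + 2 * 512)      # 1536, matching used_ram=1536MB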
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.939 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:43:15 compute-0 nova_compute[189485]: 2025-11-29 15:43:15.939 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
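The Acquiring/acquired/released triplet bracketing the update is oslo.concurrency's lockutils decorator at work: the whole of _update_available_resource runs under the "compute_resources" semaphore (held 0.200s here), and entry and exit produce exactly these DEBUG lines. A minimal sketch of the pattern, assuming only oslo.concurrency; the function body is a stand-in:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Critical section: at most one thread updates the resource
        # tracker's view at a time.
        pass

    update_available_resource()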
Nov 29 15:43:17 compute-0 nova_compute[189485]: 2025-11-29 15:43:17.934 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:43:17 compute-0 nova_compute[189485]: 2025-11-29 15:43:17.935 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:43:18 compute-0 nova_compute[189485]: 2025-11-29 15:43:18.145 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:19 compute-0 podman[248188]: 2025-11-29 15:43:19.635040666 +0000 UTC m=+0.080846128 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:43:20 compute-0 nova_compute[189485]: 2025-11-29 15:43:20.328 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:20 compute-0 nova_compute[189485]: 2025-11-29 15:43:20.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:43:20 compute-0 nova_compute[189485]: 2025-11-29 15:43:20.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:43:22 compute-0 podman[248208]: 2025-11-29 15:43:22.668284283 +0000 UTC m=+0.117000982 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:43:22 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 15:43:23 compute-0 nova_compute[189485]: 2025-11-29 15:43:23.147 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:23 compute-0 nova_compute[189485]: 2025-11-29 15:43:23.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:43:25 compute-0 nova_compute[189485]: 2025-11-29 15:43:25.330 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:28 compute-0 nova_compute[189485]: 2025-11-29 15:43:28.150 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:29 compute-0 podman[203677]: time="2025-11-29T15:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:43:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:43:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
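The two GET lines are a client walking the libpod REST API over podman's UNIX socket; the podman_exporter container configured later in this log with CONTAINER_HOST=unix:///run/podman/podman.sock is the likely caller. A sketch of the same containers/json query using only the Python standard library; the socket path is the conventional root location and access normally requires root:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""

        def __init__(self, socket_path):
            super().__init__('localhost')   # host is ignored for UNIX sockets
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    for c in json.loads(conn.getresponse().read()):
        print(c['Names'], c['State'])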
Nov 29 15:43:30 compute-0 nova_compute[189485]: 2025-11-29 15:43:30.332 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:43:31 compute-0 openstack_network_exporter[205841]: ERROR   15:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:43:31 compute-0 openstack_network_exporter[205841]: ERROR   15:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:43:31 compute-0 openstack_network_exporter[205841]: ERROR   15:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:43:31 compute-0 openstack_network_exporter[205841]: ERROR   15:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:43:31 compute-0 openstack_network_exporter[205841]: ERROR   15:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
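The exporter's appctl errors above come down to socket discovery: ovs-appctl-style clients locate a daemon through its control socket, conventionally <rundir>/<daemon>.<pid>.ctl, and ovn-northd simply does not run on a compute node; the dpif-netdev/* failures likewise suggest no userspace datapath exists on this host. A sketch of the discovery step that is failing, with the conventional default rundirs as assumptions (this deployment bind-mounts /var/lib/openvswitch/ovn as /run/ovn inside the container):

    import glob
    import os

    # Conventional rundirs; adjust for the deployment.
    DAEMONS = [
        ('ovsdb-server', '/run/openvswitch'),
        ('ovs-vswitchd', '/run/openvswitch'),
        ('ovn-northd', '/run/ovn'),
    ]

    for daemon, rundir in DAEMONS:
        # Each daemon creates its control socket as <daemon>.<pid>.ctl.
        matches = glob.glob(os.path.join(rundir, f'{daemon}.*.ctl'))
        if matches:
            print(f'{daemon}: control socket at {matches[0]}')
        else:
            print(f'{daemon}: no control socket files found')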
Nov 29 15:43:33 compute-0 nova_compute[189485]: 2025-11-29 15:43:33.151 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:35 compute-0 nova_compute[189485]: 2025-11-29 15:43:35.334 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:36 compute-0 podman[248231]: 2025-11-29 15:43:36.649432034 +0000 UTC m=+0.096167796 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:43:38 compute-0 nova_compute[189485]: 2025-11-29 15:43:38.154 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:40 compute-0 nova_compute[189485]: 2025-11-29 15:43:40.336 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:43 compute-0 nova_compute[189485]: 2025-11-29 15:43:43.156 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:44 compute-0 podman[248253]: 2025-11-29 15:43:44.801596665 +0000 UTC m=+0.087051223 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
Nov 29 15:43:45 compute-0 nova_compute[189485]: 2025-11-29 15:43:45.339 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:46 compute-0 podman[248273]: 2025-11-29 15:43:46.677180074 +0000 UTC m=+0.112347928 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:43:46 compute-0 podman[248271]: 2025-11-29 15:43:46.679340001 +0000 UTC m=+0.110191980 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, release=1214.1726694543, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Nov 29 15:43:46 compute-0 podman[248275]: 2025-11-29 15:43:46.689932414 +0000 UTC m=+0.099671419 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 29 15:43:46 compute-0 podman[248272]: 2025-11-29 15:43:46.697610769 +0000 UTC m=+0.126325601 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 15:43:46 compute-0 podman[248274]: 2025-11-29 15:43:46.71638745 +0000 UTC m=+0.145195435 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:43:48 compute-0 nova_compute[189485]: 2025-11-29 15:43:48.160 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:50 compute-0 nova_compute[189485]: 2025-11-29 15:43:50.345 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:50 compute-0 podman[248368]: 2025-11-29 15:43:50.674910895 +0000 UTC m=+0.114689430 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 15:43:52 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Nov 29 15:43:52 compute-0 systemd[1]: session-30.scope: Consumed 4.576s CPU time.
Nov 29 15:43:52 compute-0 systemd-logind[794]: Session 30 logged out. Waiting for processes to exit.
Nov 29 15:43:52 compute-0 systemd-logind[794]: Removed session 30.
Nov 29 15:43:53 compute-0 nova_compute[189485]: 2025-11-29 15:43:53.162 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:53 compute-0 podman[248388]: 2025-11-29 15:43:53.677373091 +0000 UTC m=+0.112639096 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:43:55 compute-0 nova_compute[189485]: 2025-11-29 15:43:55.349 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:55 compute-0 rsyslogd[236931]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Nov 29 15:43:58 compute-0 nova_compute[189485]: 2025-11-29 15:43:58.166 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:43:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:43:59.191 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:43:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:43:59.192 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:43:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:43:59.192 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:43:59 compute-0 podman[203677]: time="2025-11-29T15:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:43:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:43:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
Nov 29 15:44:00 compute-0 nova_compute[189485]: 2025-11-29 15:44:00.350 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.055 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them; polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.055 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.055 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.056 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
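A minimal sketch of the registration pattern the DEBUG lines above trace: pollsters are stevedore plugins, each bound to the same shared ThreadPoolExecutor and started with empty cache, history, and discovery-cache dicts. The entry-point namespace, pool size, and helper signature below are illustrative assumptions, not values read from this host; only the empty-dict arguments mirror the log.

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    def register_pollster_execution(ext, source, executor, cache,
                                    history, discovery_cache):
        # hypothetical stand-in for the helper logged at manager.py:276
        print(f"Registering pollster [{ext.name}] from source [{source}]")

    # One pool shared by every pollster, matching the single executor
    # object id repeated in the lines above.
    executor = ThreadPoolExecutor(max_workers=8)  # pool size: assumption

    mgr = extension.ExtensionManager(
        namespace='ceilometer.poll.compute',  # assumed namespace
        invoke_on_load=True,
    )
    for ext in mgr:
        register_pollster_execution(ext, source='pollsters', executor=executor,
                                    cache={}, history={}, discovery_cache={})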
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.062 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416', 'name': 'vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.066 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'name': 'test_0', 'flavor': {'id': '34af94d1-a6e1-4bf0-8957-036dc948fe9d', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': 'a4b79580-904f-4527-8cf1-3888cf1ff785'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '04d676205d9142d19f3d4ce7389f72a2', 'user_id': '5cbf094e2197487fbe16a0fe6e3076ba', 'hostId': '3d9e625461753da7712b398dbee4a211088f5f191b13d601f4d29f17', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
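Each "instance data" record emitted by discover_libvirt_polling is a flat dict merging Nova server attributes with flavor and image details; its 'id' becomes the resource prefix on every sample that follows. A small sketch of pulling out the fields a compute pollster typically needs, with values copied from the first record above (the accessor code itself is illustrative):

    # Trimmed copy of the first discovery record above.
    instance = {
        'id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512,
                   'disk': 1, 'ephemeral': 1, 'swap': 0},
        'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004',
        'tenant_id': '04d676205d9142d19f3d4ce7389f72a2',
        'metadata': {'metering.server_group': 'cf461906-40b9-4ac3-86c2-0d606dd14d99'},
    }

    resource_id = instance['id']                             # prefixes each sample
    domain_name = instance['OS-EXT-SRV-ATTR:instance_name']  # libvirt domain name
    vcpus = instance['flavor']['vcpus']                      # for cpu normalisation
    print(resource_id, domain_name, vcpus)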
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.067 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.067 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.067 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.068 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.069 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:44:01.068101) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.075 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.082 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
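The eight lines above form one complete pollster cycle, and the same shape repeats for every meter below: discovery, a coordination check against the (empty) hash rings, a heartbeat recorded by a second worker (column 12, versus poller 14), then one sample per discovered instance. Purely as an illustrative outline, with stand-in names rather than ceilometer internals:

    # Sketch of the cycle the log sequence suggests; the sample values are
    # the ones reported above for network.outgoing.bytes.
    def heartbeat(meter):
        print(f"Pollster heartbeat update: {meter}")

    def run_pollster(meter, get_volume, instances, hashrings=None):
        # The trace shows coordination is skipped when no hashring is set.
        assert hashrings is None, "coordination not exercised in this trace"
        heartbeat(meter)
        for inst in instances:
            print(f"{inst}/{meter} volume: {get_volume(inst)}")
        print(f"Finished polling pollster {meter}")

    volumes = {'dd0fdf5e-41d6-4c60-a546-112da1f37416': 2356,
               'b5d60fb8-b63e-4b0a-b908-00453be8ce37': 2342}
    run_pollster('network.outgoing.bytes', volumes.get, list(volumes))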
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.083 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.083 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.084 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.084 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.085 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.085 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.086 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.086 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:44:01.084272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.086 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.087 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:44:01.087246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.118 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/memory.usage volume: 49.07421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.149 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.150 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.151 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.152 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:44:01.151416) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.152 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.153 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.154 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.154 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:44:01.154784) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.155 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.155 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.157 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.158 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.159 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.159 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:44:01.158547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.161 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.161 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:44:01.161758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.236 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.237 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.237 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.321 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.322 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.323 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
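The disk.device.* pollsters report one sample per attached block device, which is why each instance produces three "volume:" lines here; with an m1.small flavor (1 GiB root plus 1 GiB ephemeral) the third, much smaller device is presumably a config drive. Summing the per-device figures, as a sketch, recovers the whole-guest totals a non-per-device meter would report:

    # Per-device read-byte samples copied from the lines above.
    reads = {
        'dd0fdf5e-41d6-4c60-a546-112da1f37416': [23308800, 3227648, 385378],
        'b5d60fb8-b63e-4b0a-b908-00453be8ce37': [23308800, 3227648, 274786],
    }
    for inst, per_device in reads.items():
        print(inst, sum(per_device))  # 26921826 and 26811234 bytes respectively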
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.325 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.326 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:44:01.325715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.327 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.328 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:44:01.329041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.365 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.366 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.367 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.400 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.401 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.401 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
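The capacity samples are consistent with that flavor: 1073741824 bytes is exactly 2^30 (1 GiB) for the root and ephemeral disks, while the third device reports only a few hundred KiB. A quick check:

    GiB = 1024 ** 3
    assert 1073741824 == GiB       # matches flavor 'disk': 1 and 'ephemeral': 1
    print(583680 / 1024, 485376 / 1024)   # 570.0 474.0 KiB -- the small devices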
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.403 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.404 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.404 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/cpu volume: 37930000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.404 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/cpu volume: 46770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.405 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
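The cpu meter is cumulative guest CPU time in nanoseconds, so 37930000000 above is roughly 37.9 s of CPU consumed since the instance started; a utilisation percentage has to be derived from two consecutive readings. A hedged sketch of that standard delta calculation (the second reading and the 10 s interval are invented for illustration):

    # Utilisation from two cumulative cpu-time readings; only the first
    # value (37930000000 ns) comes from the log line above.
    def cpu_util(prev_ns, curr_ns, interval_s, vcpus):
        return (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100.0

    print(cpu_util(37930000000, 37930000000 + 1_200_000_000,
                   interval_s=10, vcpus=1))
    # -> 12.0 (%): 1.2 s of CPU over a 10 s window on 1 vCPU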
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.406 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:44:01.404049) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.407 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.409 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 489570269 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:44:01.408797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.410 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 78552201 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.411 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.latency volume: 63090868 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.412 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 438919382 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.413 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 78450849 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.414 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.latency volume: 56135598 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.415 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.416 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.417 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.417 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.417 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.418 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 openstack_network_exporter[205841]: ERROR   15:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:44:01 compute-0 openstack_network_exporter[205841]: ERROR   15:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:44:01 compute-0 openstack_network_exporter[205841]: ERROR   15:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:44:01.418546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.419 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 openstack_network_exporter[205841]: ERROR   15:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:44:01 compute-0 openstack_network_exporter[205841]: ERROR   15:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
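Interleaved with the polling output, openstack_network_exporter keeps failing because it cannot find the control sockets it needs to issue appctl-style commands to ovsdb-server and ovn-northd; on a compute node that does not host ovn-northd at all, that pair of errors is expected noise. A minimal way to check for the sockets (the glob patterns are common defaults and an assumption, not paths read from this host):

    import glob

    # Empty results reproduce the "no control socket files found" errors.
    for pattern in ('/var/run/openvswitch/ovsdb-server.*.ctl',
                    '/var/run/ovn/ovn-northd.*.ctl'):
        hits = glob.glob(pattern)
        print(pattern, '->', hits if hits else 'no control socket files found')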
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.420 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.421 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.422 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.424 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.425 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.427 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.428 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.428 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.428 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.429 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.430 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.431 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.431 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:44:01.427920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.435 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.436 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.436 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:44:01.436027) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.437 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.438 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.438 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.439 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.440 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.441 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.441 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.442 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.442 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.442 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.443 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.443 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 1406170011 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.443 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:44:01.443006) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.444 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 9552907 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.444 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.445 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 1352984368 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.445 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 12116045 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.446 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.447 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.447 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.447 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.448 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.448 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.449 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.450 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.450 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.451 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.451 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.451 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:44:01.448476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.451 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.451 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.452 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.452 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.453 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:44:01.452131) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.453 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.453 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.453 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.454 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.454 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.454 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.454 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.454 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.455 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.455 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.455 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.455 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.455 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.456 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.457 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.456 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:44:01.454891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.457 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.457 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.457 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.457 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.457 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.457 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.458 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 22159360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.458 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.458 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.458 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.459 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.459 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.459 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.459 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.459 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.459 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.459 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:44:01.456372) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.460 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.460 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.460 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.460 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.460 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:44:01.457272) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:44:01.459309) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:44:01.460563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.461 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.462 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:44:01.461720) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.462 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.462 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.462 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.463 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.463 14 DEBUG ceilometer.compute.pollsters [-] dd0fdf5e-41d6-4c60-a546-112da1f37416/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.463 14 DEBUG ceilometer.compute.pollsters [-] b5d60fb8-b63e-4b0a-b908-00453be8ce37/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.463 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.464 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.465 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.466 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:44:01.462965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.467 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.467 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.467 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.467 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.467 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.468 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:44:01.469 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:44:03 compute-0 nova_compute[189485]: 2025-11-29 15:44:03.168 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:44:05 compute-0 nova_compute[189485]: 2025-11-29 15:44:05.353 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:44:07 compute-0 podman[248412]: 2025-11-29 15:44:07.662185815 +0000 UTC m=+0.104676043 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:44:08 compute-0 nova_compute[189485]: 2025-11-29 15:44:08.171 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:44:09 compute-0 nova_compute[189485]: 2025-11-29 15:44:09.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:44:10 compute-0 nova_compute[189485]: 2025-11-29 15:44:10.356 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:44:11 compute-0 nova_compute[189485]: 2025-11-29 15:44:11.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:44:11 compute-0 nova_compute[189485]: 2025-11-29 15:44:11.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:44:12 compute-0 nova_compute[189485]: 2025-11-29 15:44:12.113 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:44:12 compute-0 nova_compute[189485]: 2025-11-29 15:44:12.114 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:44:12 compute-0 nova_compute[189485]: 2025-11-29 15:44:12.115 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:44:13 compute-0 nova_compute[189485]: 2025-11-29 15:44:13.175 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:44:13 compute-0 nova_compute[189485]: 2025-11-29 15:44:13.555 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updating instance_info_cache with network_info: [{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:44:13 compute-0 nova_compute[189485]: 2025-11-29 15:44:13.574 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:44:13 compute-0 nova_compute[189485]: 2025-11-29 15:44:13.574 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.359 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.523 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.525 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.525 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.644 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:44:15 compute-0 podman[248434]: 2025-11-29 15:44:15.690029873 +0000 UTC m=+0.123443233 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.738 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.740 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.812 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.815 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.876 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.878 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.938 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:44:15 compute-0 nova_compute[189485]: 2025-11-29 15:44:15.951 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.022 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.024 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.079 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.081 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.148 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.150 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.214 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.761 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.763 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4859MB free_disk=72.333251953125GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.763 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.764 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.893 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance b5d60fb8-b63e-4b0a-b908-00453be8ce37 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.894 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance dd0fdf5e-41d6-4c60-a546-112da1f37416 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.894 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:44:16 compute-0 nova_compute[189485]: 2025-11-29 15:44:16.895 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:44:17 compute-0 nova_compute[189485]: 2025-11-29 15:44:17.027 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:44:17 compute-0 nova_compute[189485]: 2025-11-29 15:44:17.041 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
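The inventory dict above is what the resource tracker hands to placement; allocatable capacity per resource class works out to (total - reserved) * allocation_ratio. A quick check with the logged numbers:

    # Back-of-the-envelope check of the inventory reported above; the dict
    # values are copied from the log line.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {usable:g} allocatable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2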
Nov 29 15:44:17 compute-0 nova_compute[189485]: 2025-11-29 15:44:17.043 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:44:17 compute-0 nova_compute[189485]: 2025-11-29 15:44:17.044 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.280s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
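The waited/held timings above come from oslo.concurrency's lock instrumentation: everything that mutates the host resource view serializes on the "compute_resources" lock. A sketch of the pattern (function name illustrative, not nova's exact code):

    # oslo.concurrency's synchronized decorator; entry and exit produce the
    # "acquired ... waited" and "released ... held" debug lines seen above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # recompute the host resource view while no other thread mutates it
        pass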
Nov 29 15:44:17 compute-0 podman[248479]: 2025-11-29 15:44:17.683235099 +0000 UTC m=+0.114727151 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 15:44:17 compute-0 podman[248478]: 2025-11-29 15:44:17.685305174 +0000 UTC m=+0.122025976 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, container_name=kepler, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, release=1214.1726694543, vcs-type=git, version=9.4)
Nov 29 15:44:17 compute-0 podman[248492]: 2025-11-29 15:44:17.685508089 +0000 UTC m=+0.097058750 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, container_name=openstack_network_exporter, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=)
Nov 29 15:44:17 compute-0 podman[248484]: 2025-11-29 15:44:17.708240476 +0000 UTC m=+0.111050584 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:44:17 compute-0 podman[248491]: 2025-11-29 15:44:17.717556844 +0000 UTC m=+0.126667849 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
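Each podman health_status event above is emitted when a periodic healthcheck runs the script mounted at /openstack/healthcheck inside the container. A sketch of triggering and reading one check by hand; the container name is from the log, and note the inspect field is .State.Healthcheck on older podman releases:

    import subprocess

    name = 'ovn_metadata_agent'
    # Exit status 0 means the configured healthcheck command succeeded.
    subprocess.run(['podman', 'healthcheck', 'run', name], check=True)
    status = subprocess.run(
        ['podman', 'inspect', '--format', '{{.State.Health.Status}}', name],
        capture_output=True, text=True).stdout.strip()
    print(status)  # e.g. "healthy"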
Nov 29 15:44:18 compute-0 nova_compute[189485]: 2025-11-29 15:44:18.041 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:44:18 compute-0 nova_compute[189485]: 2025-11-29 15:44:18.178 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:19 compute-0 nova_compute[189485]: 2025-11-29 15:44:19.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:44:20 compute-0 nova_compute[189485]: 2025-11-29 15:44:20.363 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:21 compute-0 nova_compute[189485]: 2025-11-29 15:44:21.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:44:21 compute-0 nova_compute[189485]: 2025-11-29 15:44:21.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
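Periodic tasks like _reclaim_queued_deletes are declared with oslo.service's periodic_task decorator; with reclaim_instance_interval <= 0 the body short-circuits, producing the "skipping..." line above. A sketch, with illustrative spacing and assuming the option is registered the way nova registers it:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF  # assumes reclaim_instance_interval is registered, as in nova

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # spacing is illustrative
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # logged as "CONF.reclaim_instance_interval <= 0, skipping..."
            # reclaim soft-deleted instances here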
Nov 29 15:44:21 compute-0 podman[248574]: 2025-11-29 15:44:21.69598171 +0000 UTC m=+0.132454604 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3, config_id=multipathd)
Nov 29 15:44:23 compute-0 nova_compute[189485]: 2025-11-29 15:44:23.181 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:24 compute-0 podman[248594]: 2025-11-29 15:44:24.657755589 +0000 UTC m=+0.098348794 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:44:25 compute-0 nova_compute[189485]: 2025-11-29 15:44:25.365 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:28 compute-0 nova_compute[189485]: 2025-11-29 15:44:28.184 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
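The recurring "[POLLIN] on fd 26" lines are the OVS IDL poll loop waking because the ovsdb-server socket is readable. A stdlib sketch of the mechanism (not ovsdbapp's actual code), using a socketpair to stand in for the OVSDB connection:

    import select, socket

    r, w = socket.socketpair()
    poller = select.poll()
    poller.register(r.fileno(), select.POLLIN)

    w.send(b'update')                      # ovsdb-server sends a notification
    for fd, event in poller.poll(1000):    # returns once r is readable
        if event & select.POLLIN:
            print(f'[POLLIN] on fd {fd}')  # analogous to the vlog line above
            r.recv(4096)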
Nov 29 15:44:29 compute-0 podman[203677]: time="2025-11-29T15:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:44:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:44:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4788 "" "Go-http-client/1.1"
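The two GET requests above are the podman exporter scraping podman's libpod REST API over the unix socket (unix:///run/podman/podman.sock, per the exporter config logged below). The same query by hand:

    # Manual equivalent of the logged libpod API call; the URL path is taken
    # from the access-log line, the socket path from the exporter config.
    import json, subprocess

    out = subprocess.run(
        ['curl', '-s', '--unix-socket', '/run/podman/podman.sock',
         'http://d/v4.9.3/libpod/containers/json?all=true'],
        capture_output=True, text=True, check=True).stdout
    print(len(json.loads(out)), 'containers')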
Nov 29 15:44:30 compute-0 nova_compute[189485]: 2025-11-29 15:44:30.367 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:31 compute-0 openstack_network_exporter[205841]: ERROR   15:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:44:31 compute-0 openstack_network_exporter[205841]: ERROR   15:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:44:31 compute-0 openstack_network_exporter[205841]: ERROR   15:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:44:31 compute-0 openstack_network_exporter[205841]: ERROR   15:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:44:31 compute-0 openstack_network_exporter[205841]: ERROR   15:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
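The exporter errors above are ovs-appctl-style calls failing: no ovn-northd control socket exists on this node (northd runs on the control plane, not on computes), and the dpif-netdev/* commands only apply to a userspace datapath. A sketch of the equivalent manual probe, with the socket path an assumption:

    import glob, subprocess

    ctl = glob.glob('/run/ovn/ovn-northd.*.ctl')  # assumed control-socket location
    if not ctl:
        print('no control socket files found for ovn-northd')  # matches the log
    else:
        subprocess.run(['ovs-appctl', '-t', ctl[0], 'version'], check=True)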
Nov 29 15:44:33 compute-0 nova_compute[189485]: 2025-11-29 15:44:33.187 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:35 compute-0 nova_compute[189485]: 2025-11-29 15:44:35.370 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:38 compute-0 nova_compute[189485]: 2025-11-29 15:44:38.190 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:38 compute-0 podman[248617]: 2025-11-29 15:44:38.653718235 +0000 UTC m=+0.096374862 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:44:40 compute-0 nova_compute[189485]: 2025-11-29 15:44:40.373 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:43 compute-0 nova_compute[189485]: 2025-11-29 15:44:43.192 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:45 compute-0 nova_compute[189485]: 2025-11-29 15:44:45.376 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:46 compute-0 podman[248641]: 2025-11-29 15:44:46.657024576 +0000 UTC m=+0.091829660 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true)
Nov 29 15:44:48 compute-0 nova_compute[189485]: 2025-11-29 15:44:48.232 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:48 compute-0 podman[248661]: 2025-11-29 15:44:48.631693964 +0000 UTC m=+0.073591644 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:44:48 compute-0 podman[248663]: 2025-11-29 15:44:48.67502085 +0000 UTC m=+0.098923520 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_id=edpm, release=1755695350, version=9.6, managed_by=edpm_ansible, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 15:44:48 compute-0 podman[248660]: 2025-11-29 15:44:48.678494242 +0000 UTC m=+0.118148772 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:44:48 compute-0 podman[248659]: 2025-11-29 15:44:48.696259986 +0000 UTC m=+0.126119324 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, release-0.7.12=, architecture=x86_64, name=ubi9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 29 15:44:48 compute-0 podman[248662]: 2025-11-29 15:44:48.742838399 +0000 UTC m=+0.164830238 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 15:44:50 compute-0 nova_compute[189485]: 2025-11-29 15:44:50.378 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:52 compute-0 podman[248764]: 2025-11-29 15:44:52.689335361 +0000 UTC m=+0.139093612 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:44:53 compute-0 nova_compute[189485]: 2025-11-29 15:44:53.237 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:55 compute-0 nova_compute[189485]: 2025-11-29 15:44:55.379 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:55 compute-0 podman[248784]: 2025-11-29 15:44:55.618638725 +0000 UTC m=+0.074747266 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:44:55 compute-0 nova_compute[189485]: 2025-11-29 15:44:55.815 189489 DEBUG nova.compute.manager [req-0d6d175e-e3fc-46cf-9320-e7b769e9808d req-420ba585-7839-490f-978c-d20f34c9b7e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received event network-changed-990859f2-5f64-4a2a-9f1d-694b0d52b155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:44:55 compute-0 nova_compute[189485]: 2025-11-29 15:44:55.816 189489 DEBUG nova.compute.manager [req-0d6d175e-e3fc-46cf-9320-e7b769e9808d req-420ba585-7839-490f-978c-d20f34c9b7e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Refreshing instance network info cache due to event network-changed-990859f2-5f64-4a2a-9f1d-694b0d52b155. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:44:55 compute-0 nova_compute[189485]: 2025-11-29 15:44:55.816 189489 DEBUG oslo_concurrency.lockutils [req-0d6d175e-e3fc-46cf-9320-e7b769e9808d req-420ba585-7839-490f-978c-d20f34c9b7e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:44:55 compute-0 nova_compute[189485]: 2025-11-29 15:44:55.816 189489 DEBUG oslo_concurrency.lockutils [req-0d6d175e-e3fc-46cf-9320-e7b769e9808d req-420ba585-7839-490f-978c-d20f34c9b7e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:44:55 compute-0 nova_compute[189485]: 2025-11-29 15:44:55.817 189489 DEBUG nova.network.neutron [req-0d6d175e-e3fc-46cf-9320-e7b769e9808d req-420ba585-7839-490f-978c-d20f34c9b7e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Refreshing network info cache for port 990859f2-5f64-4a2a-9f1d-694b0d52b155 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
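The network-changed event above arrives through Nova's os-server-external-events API, which Neutron calls when a port changes. A sketch of the payload; the endpoint and token are placeholders, the server UUID and port tag are from the log:

    import json, urllib.request

    payload = {'events': [{
        'name': 'network-changed',
        'server_uuid': 'dd0fdf5e-41d6-4c60-a546-112da1f37416',
        'tag': '990859f2-5f64-4a2a-9f1d-694b0d52b155',
    }]}
    req = urllib.request.Request(
        'http://nova-api.example:8774/v2.1/os-server-external-events',  # placeholder
        data=json.dumps(payload).encode(),
        headers={'Content-Type': 'application/json', 'X-Auth-Token': '...'})
    # urllib.request.urlopen(req)  # would deliver the event to nova-api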
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.137 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.138 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.138 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.139 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.139 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.141 189489 INFO nova.compute.manager [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Terminating instance#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.142 189489 DEBUG nova.compute.manager [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
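At the hypervisor level, "Start destroying the instance" boils down to tearing down its libvirt domain; the domain name matches the systemd machine scope that terminates a few lines below. A sketch with libvirt-python, not nova's exact code path (which also cleans up volumes, networking and the domain definition):

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000004')
    dom.destroy()  # hard power-off of the QEMU process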
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.170 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.172 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.173 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.176 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
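The transaction above acknowledges the new nb_cfg by writing it into Chassis_Private.external_ids. The ovsdbapp equivalent of that DbSetCommand, where idl_api stands for an already-connected ovsdbapp southbound backend (connection setup omitted; the record UUID is from the log):

    # idl_api: assumed pre-built ovsdbapp backend connected to the OVN SB DB.
    idl_api.db_set(
        'Chassis_Private', '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),
    ).execute(check_error=True)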
Nov 29 15:44:56 compute-0 kernel: tap990859f2-5f (unregistering): left promiscuous mode
Nov 29 15:44:56 compute-0 NetworkManager[56360]: <info>  [1764431096.1991] device (tap990859f2-5f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:44:56 compute-0 ovn_controller[97827]: 2025-11-29T15:44:56Z|00058|binding|INFO|Releasing lport 990859f2-5f64-4a2a-9f1d-694b0d52b155 from this chassis (sb_readonly=0)
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.218 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 ovn_controller[97827]: 2025-11-29T15:44:56Z|00059|binding|INFO|Setting lport 990859f2-5f64-4a2a-9f1d-694b0d52b155 down in Southbound
Nov 29 15:44:56 compute-0 ovn_controller[97827]: 2025-11-29T15:44:56Z|00060|binding|INFO|Removing iface tap990859f2-5f ovn-installed in OVS
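The release can be cross-checked in the OVN southbound database: after these lines the Port_Binding row has an empty chassis and up=[false], which is exactly the update the metadata agent matches next. A sketch, to be run wherever the SB DB is reachable:

    import subprocess

    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=990859f2-5f64-4a2a-9f1d-694b0d52b155'],
        check=True)
    # Expect an empty "chassis" column and up : [false] once the lport
    # has been released from this chassis.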
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.223 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.233 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:96:c1:c2 192.168.0.225'], port_security=['fa:16:3e:96:c1:c2 192.168.0.225'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-nju3ymh64jso-he4f6zydsa2j-l6hxu724o2mv-port-fyvusaifittf', 'neutron:cidrs': '192.168.0.225/24', 'neutron:device_id': 'dd0fdf5e-41d6-4c60-a546-112da1f37416', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa63adc8-00c5-408f-a9a0-653db4d11058', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-nju3ymh64jso-he4f6zydsa2j-l6hxu724o2mv-port-fyvusaifittf', 'neutron:project_id': '04d676205d9142d19f3d4ce7389f72a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab1ce576-0f3a-4a3e-abf1-69502fd41864', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566ecd39-faeb-413e-8894-df94f2ba695a, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=990859f2-5f64-4a2a-9f1d-694b0d52b155) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.235 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 990859f2-5f64-4a2a-9f1d-694b0d52b155 in datapath fa63adc8-00c5-408f-a9a0-653db4d11058 unbound from our chassis#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.238 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network fa63adc8-00c5-408f-a9a0-653db4d11058#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.241 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.265 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[973fd68d-b0a1-47b5-855e-be23b7c27da5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:44:56 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Nov 29 15:44:56 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 58.912s CPU time.
Nov 29 15:44:56 compute-0 systemd-machined[155802]: Machine qemu-4-instance-00000004 terminated.
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.312 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[967af7e2-6c43-4828-9f70-98b6a8255afa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.315 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[ce341979-47f3-4363-ab03-4e961eebfa88]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.350 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[abe7c5b9-1d08-4c8d-9812-9a8bcead0924]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.372 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[36b2cb7d-1711-4171-8a3a-87df2c8a7f54]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapfa63adc8-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:5d:9e:29'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 15, 'rx_bytes': 532, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373724, 'reachable_time': 26817, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248820, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.375 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.381 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.395 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[66058384-e8f7-42d2-8b16-6243d051be34]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373741, 'tstamp': 373741}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248826, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapfa63adc8-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 373746, 'tstamp': 373746}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248826, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
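
[Annotation] The privsep replies above are pyroute2 netlink dumps taken inside the ovnmeta-fa63adc8-... namespace: the metadata tap holds 169.254.169.254/32 (the metadata VIP) and 192.168.0.2/24 (its address on the tenant subnet). A sketch reading the same addresses directly, assuming pyroute2 is available and this runs as root on the compute node:

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058')
    try:
        for addr in ns.get_addr(label='tapfa63adc8-01'):
            attrs = dict(addr['attrs'])
            # Expect 169.254.169.254/32 and 192.168.0.2/24 per the RTM_NEWADDR dump.
            print('%s/%s' % (attrs['IFA_ADDRESS'], addr['prefixlen']))
    finally:
        ns.close()
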
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.397 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa63adc8-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.399 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.408 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfa63adc8-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.408 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.408 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.409 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapfa63adc8-00, col_values=(('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:44:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:56.409 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
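
[Annotation] The transactions above ensure the metadata port tapfa63adc8-00 is detached from br-ex, attached to br-int, and tagged with its iface-id; the add and set commands report "Transaction caused no change" because the port was already in place. The same three commands, batched into one ovsdbapp transaction rather than the agent's one-command-per-txn style; the socket path is an assumption:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVS_SOCK = 'unix:/run/openvswitch/db.sock'  # assumption: default ovsdb-server socket
    idl = connection.OvsdbIdl.from_server(OVS_SOCK, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapfa63adc8-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapfa63adc8-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapfa63adc8-00',
            ('external_ids', {'iface-id': 'e36df9a9-fba2-436d-a18e-320b39f26f3c'})))
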
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.434 189489 INFO nova.virt.libvirt.driver [-] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Instance destroyed successfully.#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.435 189489 DEBUG nova.objects.instance [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'resources' on Instance uuid dd0fdf5e-41d6-4c60-a546-112da1f37416 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.449 189489 DEBUG nova.virt.libvirt.vif [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:34:43Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-mh64jso-he4f6zydsa2j-l6hxu724o2mv-vnf-rlelz4fnk4me',id=4,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:34:55Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='cf461906-40b9-4ac3-86c2-0d606dd14d99'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-saogslav',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader,admin',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:34:55Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzU3NjQ5NjExNDQzNDM5MDQzOD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Nov 29 15:44:56 compute-0 nova_compute[189485]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzU3NjQ5NjExNDQzNDM5MDQzOD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTc1NzY0OTYxMTQ0MzQzOTA0Mzg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03NTc2NDk2MTE0NDM0MzkwNDM4PT0tLQo=',user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=dd0fdf5e-41d6-4c60-a546-112da1f37416,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.450 189489 DEBUG nova.network.os_vif_util [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.224", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.451 189489 DEBUG nova.network.os_vif_util [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:96:c1:c2,bridge_name='br-int',has_traffic_filtering=True,id=990859f2-5f64-4a2a-9f1d-694b0d52b155,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap990859f2-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.451 189489 DEBUG os_vif [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:c1:c2,bridge_name='br-int',has_traffic_filtering=True,id=990859f2-5f64-4a2a-9f1d-694b0d52b155,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap990859f2-5f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.453 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.453 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap990859f2-5f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.455 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.456 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.459 189489 INFO os_vif [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:96:c1:c2,bridge_name='br-int',has_traffic_filtering=True,id=990859f2-5f64-4a2a-9f1d-694b0d52b155,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap990859f2-5f')#033[00m
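
[Annotation] The unplug above is driven by the VIFOpenVSwitch object printed in the preceding lines. A sketch of the same os_vif call made standalone; the field values are copied from the log, while the object construction details (Network fields, InstanceInfo, the domain name) are assumptions about a minimal valid invocation:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()

    # Field names match the VIFOpenVSwitch repr logged above; nova sets more of them.
    my_vif = vif.VIFOpenVSwitch(
        id='990859f2-5f64-4a2a-9f1d-694b0d52b155',
        address='fa:16:3e:96:c1:c2',
        bridge_name='br-int',
        vif_name='tap990859f2-5f',
        plugin='ovs',
        network=network.Network(id='fa63adc8-00c5-408f-a9a0-653db4d11058',
                                bridge='br-int'))
    info = instance_info.InstanceInfo(
        uuid='dd0fdf5e-41d6-4c60-a546-112da1f37416',
        name='instance-00000004')  # assumption: the libvirt domain name

    os_vif.unplug(my_vif, info)  # removes tap990859f2-5f from br-int
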
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.459 189489 INFO nova.virt.libvirt.driver [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Deleting instance files /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416_del#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.460 189489 INFO nova.virt.libvirt.driver [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Deletion of /var/lib/nova/instances/dd0fdf5e-41d6-4c60-a546-112da1f37416_del complete#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.466 189489 DEBUG nova.compute.manager [req-6d690598-e82c-42f2-9e4b-a8892c33c0b0 req-961a1f5e-7deb-4a97-ab17-fa2f22ef53eb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received event network-vif-unplugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.466 189489 DEBUG oslo_concurrency.lockutils [req-6d690598-e82c-42f2-9e4b-a8892c33c0b0 req-961a1f5e-7deb-4a97-ab17-fa2f22ef53eb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.466 189489 DEBUG oslo_concurrency.lockutils [req-6d690598-e82c-42f2-9e4b-a8892c33c0b0 req-961a1f5e-7deb-4a97-ab17-fa2f22ef53eb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.466 189489 DEBUG oslo_concurrency.lockutils [req-6d690598-e82c-42f2-9e4b-a8892c33c0b0 req-961a1f5e-7deb-4a97-ab17-fa2f22ef53eb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.467 189489 DEBUG nova.compute.manager [req-6d690598-e82c-42f2-9e4b-a8892c33c0b0 req-961a1f5e-7deb-4a97-ab17-fa2f22ef53eb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] No waiting events found dispatching network-vif-unplugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.467 189489 DEBUG nova.compute.manager [req-6d690598-e82c-42f2-9e4b-a8892c33c0b0 req-961a1f5e-7deb-4a97-ab17-fa2f22ef53eb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received event network-vif-unplugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.525 189489 INFO nova.compute.manager [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.525 189489 DEBUG oslo.service.loopingcall [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.526 189489 DEBUG nova.compute.manager [-] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.526 189489 DEBUG nova.network.neutron [-] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:44:56 compute-0 rsyslogd[236931]: message too long (8192) with configured size 8096, begin of message is: 2025-11-29 15:44:56.449 189489 DEBUG nova.virt.libvirt.vif [None req-8b15303b-15 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
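
[Annotation] This rsyslogd complaint is why the giant nova.virt.libvirt.vif DEBUG record above arrives split and truncated: the message (8192 bytes and up) exceeds the configured 8096-byte limit, so the base64 user_data payload is cut mid-stream. If the full records matter, the limit can be raised in /etc/rsyslog.conf, e.g. with the legacy directive $MaxMessageSize 64k placed before any module loads, or global(maxMessageSize="64k") in RainerScript syntax; both are standard rsyslog options, though the exact value is a judgment call for this deployment.
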
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.954 189489 DEBUG nova.network.neutron [req-0d6d175e-e3fc-46cf-9320-e7b769e9808d req-420ba585-7839-490f-978c-d20f34c9b7e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updated VIF entry in instance network info cache for port 990859f2-5f64-4a2a-9f1d-694b0d52b155. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:44:56 compute-0 nova_compute[189485]: 2025-11-29 15:44:56.954 189489 DEBUG nova.network.neutron [req-0d6d175e-e3fc-46cf-9320-e7b769e9808d req-420ba585-7839-490f-978c-d20f34c9b7e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updating instance_info_cache with network_info: [{"id": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "address": "fa:16:3e:96:c1:c2", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.225", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap990859f2-5f", "ovs_interfaceid": "990859f2-5f64-4a2a-9f1d-694b0d52b155", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:44:57 compute-0 nova_compute[189485]: 2025-11-29 15:44:57.006 189489 DEBUG oslo_concurrency.lockutils [req-0d6d175e-e3fc-46cf-9320-e7b769e9808d req-420ba585-7839-490f-978c-d20f34c9b7e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-dd0fdf5e-41d6-4c60-a546-112da1f37416" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:44:58 compute-0 nova_compute[189485]: 2025-11-29 15:44:58.659 189489 DEBUG nova.compute.manager [req-b882920f-d698-45d4-8f2a-99eea041a6ef req-5a00fb06-d5e1-4a80-a640-eb3edaabbec6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received event network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:44:58 compute-0 nova_compute[189485]: 2025-11-29 15:44:58.660 189489 DEBUG oslo_concurrency.lockutils [req-b882920f-d698-45d4-8f2a-99eea041a6ef req-5a00fb06-d5e1-4a80-a640-eb3edaabbec6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:44:58 compute-0 nova_compute[189485]: 2025-11-29 15:44:58.660 189489 DEBUG oslo_concurrency.lockutils [req-b882920f-d698-45d4-8f2a-99eea041a6ef req-5a00fb06-d5e1-4a80-a640-eb3edaabbec6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:44:58 compute-0 nova_compute[189485]: 2025-11-29 15:44:58.660 189489 DEBUG oslo_concurrency.lockutils [req-b882920f-d698-45d4-8f2a-99eea041a6ef req-5a00fb06-d5e1-4a80-a640-eb3edaabbec6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:44:58 compute-0 nova_compute[189485]: 2025-11-29 15:44:58.660 189489 DEBUG nova.compute.manager [req-b882920f-d698-45d4-8f2a-99eea041a6ef req-5a00fb06-d5e1-4a80-a640-eb3edaabbec6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] No waiting events found dispatching network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:44:58 compute-0 nova_compute[189485]: 2025-11-29 15:44:58.660 189489 WARNING nova.compute.manager [req-b882920f-d698-45d4-8f2a-99eea041a6ef req-5a00fb06-d5e1-4a80-a640-eb3edaabbec6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Received unexpected event network-vif-plugged-990859f2-5f64-4a2a-9f1d-694b0d52b155 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.168 189489 DEBUG nova.network.neutron [-] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:44:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:59.192 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.192 189489 INFO nova.compute.manager [-] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Took 2.67 seconds to deallocate network for instance.#033[00m
Nov 29 15:44:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:59.192 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:44:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:44:59.193 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.248 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.249 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.371 189489 DEBUG nova.compute.provider_tree [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.391 189489 DEBUG nova.scheduler.client.report [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
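
[Annotation] For reference, placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. A quick check against the numbers logged above:

    # Effective capacity = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
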
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.409 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.446 189489 INFO nova.scheduler.client.report [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Deleted allocations for instance dd0fdf5e-41d6-4c60-a546-112da1f37416#033[00m
Nov 29 15:44:59 compute-0 nova_compute[189485]: 2025-11-29 15:44:59.539 189489 DEBUG oslo_concurrency.lockutils [None req-8b15303b-1592-4467-b966-aaf838b78ad6 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "dd0fdf5e-41d6-4c60-a546-112da1f37416" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.401s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:44:59 compute-0 podman[203677]: time="2025-11-29T15:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:44:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:44:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4791 "" "Go-http-client/1.1"
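
[Annotation] The "@ - -" lines are the podman service's HTTP access log: a client (the prometheus-podman-exporter seen below mounts /run/podman/podman.sock) is polling the libpod REST API. The same endpoint can be queried from stdlib Python over the UNIX socket; a sketch, assuming root access to the socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, sock_path):
            super().__init__('localhost')  # host is unused for a unix socket
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    # Endpoint copied from the access-log line above.
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')
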
Nov 29 15:45:00 compute-0 nova_compute[189485]: 2025-11-29 15:45:00.381 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:01 compute-0 openstack_network_exporter[205841]: ERROR   15:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:45:01 compute-0 openstack_network_exporter[205841]: ERROR   15:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:45:01 compute-0 openstack_network_exporter[205841]: ERROR   15:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:45:01 compute-0 openstack_network_exporter[205841]: ERROR   15:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:45:01 compute-0 openstack_network_exporter[205841]: ERROR   15:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:45:01 compute-0 nova_compute[189485]: 2025-11-29 15:45:01.454 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:05 compute-0 nova_compute[189485]: 2025-11-29 15:45:05.383 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:06 compute-0 nova_compute[189485]: 2025-11-29 15:45:06.456 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:09 compute-0 podman[248846]: 2025-11-29 15:45:09.676503818 +0000 UTC m=+0.111755953 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:45:10 compute-0 nova_compute[189485]: 2025-11-29 15:45:10.385 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:10 compute-0 nova_compute[189485]: 2025-11-29 15:45:10.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:45:11 compute-0 nova_compute[189485]: 2025-11-29 15:45:11.432 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431096.4301426, dd0fdf5e-41d6-4c60-a546-112da1f37416 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:45:11 compute-0 nova_compute[189485]: 2025-11-29 15:45:11.432 189489 INFO nova.compute.manager [-] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] VM Stopped (Lifecycle Event)#033[00m
Nov 29 15:45:11 compute-0 nova_compute[189485]: 2025-11-29 15:45:11.456 189489 DEBUG nova.compute.manager [None req-4fd85181-650f-4931-98fc-23963335d75d - - - - - -] [instance: dd0fdf5e-41d6-4c60-a546-112da1f37416] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:45:11 compute-0 nova_compute[189485]: 2025-11-29 15:45:11.459 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:11 compute-0 nova_compute[189485]: 2025-11-29 15:45:11.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:45:11 compute-0 nova_compute[189485]: 2025-11-29 15:45:11.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:45:11 compute-0 nova_compute[189485]: 2025-11-29 15:45:11.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 15:45:12 compute-0 nova_compute[189485]: 2025-11-29 15:45:12.166 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:45:12 compute-0 nova_compute[189485]: 2025-11-29 15:45:12.167 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:45:12 compute-0 nova_compute[189485]: 2025-11-29 15:45:12.168 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 15:45:12 compute-0 nova_compute[189485]: 2025-11-29 15:45:12.169 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:45:13 compute-0 nova_compute[189485]: 2025-11-29 15:45:13.380 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [{"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:45:13 compute-0 nova_compute[189485]: 2025-11-29 15:45:13.399 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-b5d60fb8-b63e-4b0a-b908-00453be8ce37" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:45:13 compute-0 nova_compute[189485]: 2025-11-29 15:45:13.400 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.365 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.366 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.366 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.366 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.367 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.368 189489 INFO nova.compute.manager [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Terminating instance#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.368 189489 DEBUG nova.compute.manager [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:45:14 compute-0 kernel: tap71c1eec4-61 (unregistering): left promiscuous mode
Nov 29 15:45:14 compute-0 NetworkManager[56360]: <info>  [1764431114.4199] device (tap71c1eec4-61): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:45:14 compute-0 ovn_controller[97827]: 2025-11-29T15:45:14Z|00061|binding|INFO|Releasing lport 71c1eec4-610d-4d07-b3d3-b94428ea07fc from this chassis (sb_readonly=0)
Nov 29 15:45:14 compute-0 ovn_controller[97827]: 2025-11-29T15:45:14Z|00062|binding|INFO|Setting lport 71c1eec4-610d-4d07-b3d3-b94428ea07fc down in Southbound
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.422 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.426 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 ovn_controller[97827]: 2025-11-29T15:45:14Z|00063|binding|INFO|Removing iface tap71c1eec4-61 ovn-installed in OVS
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.429 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.461 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Nov 29 15:45:14 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 19.738s CPU time.
Nov 29 15:45:14 compute-0 systemd-machined[155802]: Machine qemu-1-instance-00000001 terminated.
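systemd escapes "-" inside unit names as \x2d, so the scope in the two records above is simply the machine qemu-1-instance-00000001 that systemd-machined reports terminated. A quick Python check of that unescaping (pure string handling, nothing systemd-specific):

    unit = r'machine-qemu\x2d1\x2dinstance\x2d00000001.scope'
    name = unit.replace(r'\x2d', '-').removeprefix('machine-').removesuffix('.scope')
    assert name == 'qemu-1-instance-00000001'  # the machine named in the next record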
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.505 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:da:91:00 192.168.0.142'], port_security=['fa:16:3e:da:91:00 192.168.0.142'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.142/24', 'neutron:device_id': 'b5d60fb8-b63e-4b0a-b908-00453be8ce37', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-fa63adc8-00c5-408f-a9a0-653db4d11058', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '04d676205d9142d19f3d4ce7389f72a2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ab1ce576-0f3a-4a3e-abf1-69502fd41864', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.215'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=566ecd39-faeb-413e-8894-df94f2ba695a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=71c1eec4-610d-4d07-b3d3-b94428ea07fc) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.507 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 71c1eec4-610d-4d07-b3d3-b94428ea07fc in datapath fa63adc8-00c5-408f-a9a0-653db4d11058 unbound from our chassis#033[00m
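The "Matched UPDATE: PortBindingUpdatedEvent(...)" record is ovsdbapp's row-event machinery: the agent registers RowEvent subclasses against the southbound Port_Binding table and reacts when a matched row changes. A sketch of the shape of such an event; the class body is illustrative, not Neutron's actual implementation:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None,
            # exactly as printed in the matched-event record above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Invoked with the new and old rows; here old had up=[True] and a
            # chassis set, so the agent treats the port as unbound from us.
            print('port %s unbound' % row.logical_port)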
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.508 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network fa63adc8-00c5-408f-a9a0-653db4d11058, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.510 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[0e52e036-661c-41e2-a265-e5123d4b80c5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.510 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058 namespace which is not needed anymore#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.663 189489 INFO nova.virt.libvirt.driver [-] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Instance destroyed successfully.#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.663 189489 DEBUG nova.objects.instance [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lazy-loading 'resources' on Instance uuid b5d60fb8-b63e-4b0a-b908-00453be8ce37 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:45:14 compute-0 neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058[239969]: [NOTICE]   (239973) : haproxy version is 2.8.14-c23fe91
Nov 29 15:45:14 compute-0 neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058[239969]: [NOTICE]   (239973) : path to executable is /usr/sbin/haproxy
Nov 29 15:45:14 compute-0 neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058[239969]: [WARNING]  (239973) : Exiting Master process...
Nov 29 15:45:14 compute-0 neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058[239969]: [ALERT]    (239973) : Current worker (239975) exited with code 143 (Terminated)
Nov 29 15:45:14 compute-0 neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058[239969]: [WARNING]  (239973) : All workers exited. Exiting... (0)
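Worker exit code 143 in the ALERT above is the usual 128 + signal-number encoding, i.e. SIGTERM rather than a crash:

    import signal
    # 128 + SIGTERM(15) == 143: the haproxy worker was terminated cleanly.
    assert 128 + signal.SIGTERM == 143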
Nov 29 15:45:14 compute-0 systemd[1]: libpod-fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37.scope: Deactivated successfully.
Nov 29 15:45:14 compute-0 podman[248904]: 2025-11-29 15:45:14.712432944 +0000 UTC m=+0.081138285 container died fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.724 189489 DEBUG nova.virt.libvirt.vif [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:26:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:26:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='04d676205d9142d19f3d4ce7389f72a2',ramdisk_id='',reservation_id='r-ym8olkg3',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='a4b79580-904f-4527-8cf1-3888cf1ff785',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:26:19Z,user_data=None,user_id='5cbf094e2197487fbe16a0fe6e3076ba',uuid=b5d60fb8-b63e-4b0a-b908-00453be8ce37,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.724 189489 DEBUG nova.network.os_vif_util [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converting VIF {"id": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "address": "fa:16:3e:da:91:00", "network": {"id": "fa63adc8-00c5-408f-a9a0-653db4d11058", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.142", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.215", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "04d676205d9142d19f3d4ce7389f72a2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap71c1eec4-61", "ovs_interfaceid": "71c1eec4-610d-4d07-b3d3-b94428ea07fc", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.725 189489 DEBUG nova.network.os_vif_util [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:da:91:00,bridge_name='br-int',has_traffic_filtering=True,id=71c1eec4-610d-4d07-b3d3-b94428ea07fc,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71c1eec4-61') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.726 189489 DEBUG os_vif [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:91:00,bridge_name='br-int',has_traffic_filtering=True,id=71c1eec4-610d-4d07-b3d3-b94428ea07fc,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71c1eec4-61') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.728 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.729 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap71c1eec4-61, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
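That DelPortCommand is a single ovsdbapp transaction against the local Open_vSwitch database. Roughly the same call through ovsdbapp's public API (the socket path is an assumption; os-vif builds its connection differently):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # One txn containing DelPortCommand(port=..., bridge=..., if_exists=True):
    api.del_port('tap71c1eec4-61', bridge='br-int', if_exists=True).execute(
        check_error=True)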
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.731 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.733 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.736 189489 INFO os_vif [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:da:91:00,bridge_name='br-int',has_traffic_filtering=True,id=71c1eec4-610d-4d07-b3d3-b94428ea07fc,network=Network(fa63adc8-00c5-408f-a9a0-653db4d11058),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap71c1eec4-61')#033[00m
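The unplug itself goes through the public os-vif entry points: initialize() loads the plugins, then unplug() dispatches to the ovs plugin for a VIFOpenVSwitch. A sketch against a live OVS, rebuilding only the fields this log shows (a real call passes Nova's full VIF and instance objects):

    import os_vif
    from os_vif.objects import instance_info, vif as vif_obj

    os_vif.initialize()            # load the ovs/... plugins via stevedore
    vif = vif_obj.VIFOpenVSwitch(
        id='71c1eec4-610d-4d07-b3d3-b94428ea07fc',
        address='fa:16:3e:da:91:00',
        bridge_name='br-int',
        vif_name='tap71c1eec4-61')
    info = instance_info.InstanceInfo(
        uuid='b5d60fb8-b63e-4b0a-b908-00453be8ce37', name='instance-00000001')
    os_vif.unplug(vif, info)       # emits "Successfully unplugged vif ..." on success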
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.738 189489 INFO nova.virt.libvirt.driver [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Deleting instance files /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37_del#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.739 189489 INFO nova.virt.libvirt.driver [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Deletion of /var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37_del complete#033[00m
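The _del suffix in the two records above is delete-by-rename: the instance directory is renamed to <uuid>_del first and removed afterwards, so an interrupted delete leaves an obviously stale directory rather than a half-emptied live one. The pattern, sketched with the paths from this log (not Nova's exact retry logic):

    import os
    import shutil

    base = '/var/lib/nova/instances/b5d60fb8-b63e-4b0a-b908-00453be8ce37'
    target = base + '_del'
    os.rename(base, target)   # atomic rename first ...
    shutil.rmtree(target)     # ... then the slow recursive delete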
Nov 29 15:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37-userdata-shm.mount: Deactivated successfully.
Nov 29 15:45:14 compute-0 systemd[1]: var-lib-containers-storage-overlay-49f375938944383c0096ed8219c0486165ec32a17e7708e1f5528067da92808b-merged.mount: Deactivated successfully.
Nov 29 15:45:14 compute-0 podman[248904]: 2025-11-29 15:45:14.765509 +0000 UTC m=+0.134214301 container cleanup fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:45:14 compute-0 systemd[1]: libpod-conmon-fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37.scope: Deactivated successfully.
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.830 189489 INFO nova.compute.manager [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.830 189489 DEBUG oslo.service.loopingcall [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.831 189489 DEBUG nova.compute.manager [-] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.831 189489 DEBUG nova.network.neutron [-] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:45:14 compute-0 podman[248942]: 2025-11-29 15:45:14.844560018 +0000 UTC m=+0.053965010 container remove fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.858 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[08a4ae63-a34b-4d54-903a-8fcaab030f98]: (4, ('Sat Nov 29 03:45:14 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058 (fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37)\nfc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37\nSat Nov 29 03:45:14 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058 (fc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37)\nfc438b559cff40fc1e6d3f02ae1be5993bb588087d6cb1ab77d92f3596c93c37\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.861 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[ca9fc0af-352b-44ce-81b4-9d765f0b48d8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.863 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfa63adc8-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.864 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 kernel: tapfa63adc8-00: left promiscuous mode
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.883 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.884 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.887 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[07a1065d-6483-4889-8a7b-3d0aa7003bfb]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.903 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[4e70dc73-185d-446f-baee-de914cd69cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.905 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[af95def2-6fbc-4cda-a8f6-99e87b85cbfe]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.927 189489 DEBUG nova.compute.manager [req-8242609c-1ffb-44d1-9685-69490da3d93c req-24b37a65-f1a5-4618-be28-3dabdca1243c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received event network-vif-unplugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.927 189489 DEBUG oslo_concurrency.lockutils [req-8242609c-1ffb-44d1-9685-69490da3d93c req-24b37a65-f1a5-4618-be28-3dabdca1243c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.928 189489 DEBUG oslo_concurrency.lockutils [req-8242609c-1ffb-44d1-9685-69490da3d93c req-24b37a65-f1a5-4618-be28-3dabdca1243c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.928 189489 DEBUG oslo_concurrency.lockutils [req-8242609c-1ffb-44d1-9685-69490da3d93c req-24b37a65-f1a5-4618-be28-3dabdca1243c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.928 189489 DEBUG nova.compute.manager [req-8242609c-1ffb-44d1-9685-69490da3d93c req-24b37a65-f1a5-4618-be28-3dabdca1243c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] No waiting events found dispatching network-vif-unplugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:45:14 compute-0 nova_compute[189485]: 2025-11-29 15:45:14.929 189489 DEBUG nova.compute.manager [req-8242609c-1ffb-44d1-9685-69490da3d93c req-24b37a65-f1a5-4618-be28-3dabdca1243c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received event network-vif-unplugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
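"No waiting events found" above means no other thread had registered interest in this event before it arrived, so the pop finds nothing to wake. A simplified sketch of that pop-or-log pattern (illustrative, not Nova's actual bookkeeping):

    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def pop_instance_event(uuid, name):
        ev = _waiters.pop((uuid, name), None)
        if ev is None:
            print('No waiting events found dispatching %s' % name)
            return None
        ev.set()   # wake whoever is blocked waiting for this event
        return ev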
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.929 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd3279f-44f7-48c3-8a9d-6c627cb4f4eb]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 373709, 'reachable_time': 43141, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248956, 'error': None, 'target': 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:45:14 compute-0 systemd[1]: run-netns-ovnmeta\x2dfa63adc8\x2d00c5\x2d408f\x2da9a0\x2d653db4d11058.mount: Deactivated successfully.
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.958 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 15:45:14 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:14.959 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[a326dc25-b1e3-47b1-8ea0-e43b828e1a2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
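remove_netns is a privsep-decorated wrapper; the reply record above is the privsep daemon returning its successful (4, None) result to the agent. Approximately the underlying call, via pyroute2 (which Neutron's privileged ip_lib uses):

    from pyroute2 import netns

    ns = 'ovnmeta-fa63adc8-00c5-408f-a9a0-653db4d11058'
    if ns in netns.listnetns():
        netns.remove(ns)   # unlinks /run/netns/<ns>; systemd then drops its mount unit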
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.387 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.715 189489 DEBUG nova.network.neutron [-] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.743 189489 INFO nova.compute.manager [-] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Took 0.91 seconds to deallocate network for instance.#033[00m
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.782 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.783 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.864 189489 DEBUG nova.compute.provider_tree [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.878 189489 DEBUG nova.scheduler.client.report [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
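"Inventory has not changed" is a plain equality check of the freshly reported inventory against the cached provider tree; when the dicts match, the report client skips the PUT to placement. The comparison, reduced to its essentials with the VCPU data from this record:

    cached = {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8,
                       'step_size': 1, 'allocation_ratio': 4.0}}
    fresh = {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8,
                      'step_size': 1, 'allocation_ratio': 4.0}}

    if fresh == cached:
        pass  # no PUT /resource_providers/{uuid}/inventories round trip needed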
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.898 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.936 189489 INFO nova.scheduler.client.report [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Deleted allocations for instance b5d60fb8-b63e-4b0a-b908-00453be8ce37#033[00m
Nov 29 15:45:15 compute-0 nova_compute[189485]: 2025-11-29 15:45:15.998 189489 DEBUG oslo_concurrency.lockutils [None req-cd3850f9-9b6d-4644-b6b8-dfdc3e3ced91 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:45:16 compute-0 nova_compute[189485]: 2025-11-29 15:45:16.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:45:16 compute-0 nova_compute[189485]: 2025-11-29 15:45:16.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.046 189489 DEBUG nova.compute.manager [req-1420f73a-044b-4edf-94fc-1a2623dff0aa req-de17b748-e377-4d51-b671-08ac319018d6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received event network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.047 189489 DEBUG oslo_concurrency.lockutils [req-1420f73a-044b-4edf-94fc-1a2623dff0aa req-de17b748-e377-4d51-b671-08ac319018d6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.048 189489 DEBUG oslo_concurrency.lockutils [req-1420f73a-044b-4edf-94fc-1a2623dff0aa req-de17b748-e377-4d51-b671-08ac319018d6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.048 189489 DEBUG oslo_concurrency.lockutils [req-1420f73a-044b-4edf-94fc-1a2623dff0aa req-de17b748-e377-4d51-b671-08ac319018d6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "b5d60fb8-b63e-4b0a-b908-00453be8ce37-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.049 189489 DEBUG nova.compute.manager [req-1420f73a-044b-4edf-94fc-1a2623dff0aa req-de17b748-e377-4d51-b671-08ac319018d6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] No waiting events found dispatching network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.050 189489 WARNING nova.compute.manager [req-1420f73a-044b-4edf-94fc-1a2623dff0aa req-de17b748-e377-4d51-b671-08ac319018d6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received unexpected event network-vif-plugged-71c1eec4-610d-4d07-b3d3-b94428ea07fc for instance with vm_state deleted and task_state None.#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.050 189489 DEBUG nova.compute.manager [req-1420f73a-044b-4edf-94fc-1a2623dff0aa req-de17b748-e377-4d51-b671-08ac319018d6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Received event network-vif-deleted-71c1eec4-610d-4d07-b3d3-b94428ea07fc external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.559 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.559 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.560 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:45:17 compute-0 nova_compute[189485]: 2025-11-29 15:45:17.560 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:45:17 compute-0 podman[248958]: 2025-11-29 15:45:17.694122977 +0000 UTC m=+0.138598938 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true)
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.011 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.013 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5391MB free_disk=72.37865447998047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.013 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.014 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.371 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.372 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.401 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.473 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.822 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:45:18 compute-0 nova_compute[189485]: 2025-11-29 15:45:18.823 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.809s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:45:19 compute-0 podman[248981]: 2025-11-29 15:45:19.651161688 +0000 UTC m=+0.086615501 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 29 15:45:19 compute-0 podman[248979]: 2025-11-29 15:45:19.668882951 +0000 UTC m=+0.103773799 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, config_id=edpm, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 29 15:45:19 compute-0 podman[248980]: 2025-11-29 15:45:19.670569006 +0000 UTC m=+0.112537192 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 15:45:19 compute-0 podman[248989]: 2025-11-29 15:45:19.677384878 +0000 UTC m=+0.094484512 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 29 15:45:19 compute-0 podman[248986]: 2025-11-29 15:45:19.692482631 +0000 UTC m=+0.122583631 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
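Each health_status line above is podman running the container's configured healthcheck: config_data mounts a script into the container ('mount': '/var/lib/openstack/healthchecks/<name>' at /openstack) and runs the 'test' command; a zero exit code is recorded as health_status=healthy. A minimal sketch of the equivalent check from the host, assuming only the podman CLI (names mirror the log fields, not podman internals):

    import subprocess

    def health_status(name, test_cmd="/openstack/healthcheck"):
        """Run a container's healthcheck command and map its exit code to
        the health_status value journald records (healthy/unhealthy)."""
        result = subprocess.run(["podman", "exec", name, test_cmd],
                                capture_output=True, timeout=30)
        return "healthy" if result.returncode == 0 else "unhealthy"

    # e.g. health_status("ovn_metadata_agent") -> "healthy"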
Nov 29 15:45:19 compute-0 nova_compute[189485]: 2025-11-29 15:45:19.732 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:20 compute-0 nova_compute[189485]: 2025-11-29 15:45:20.389 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
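These recurring [POLLIN] wakeups come from the python-ovs poll loop that the OVSDB IDL runs inside nova_compute; fd 26 is the IDL's database connection. A minimal sketch of that loop, assuming the python3-ovs bindings are installed (the socket here is only a stand-in for the OVSDB connection):

    import select
    import socket
    from ovs import poller  # python3-ovs bindings used by ovsdbapp

    sock = socket.socket()                   # stand-in for the IDL's fd 26
    p = poller.Poller()
    p.fd_wait(sock.fileno(), select.POLLIN)  # wake when the fd is readable
    p.timer_wait(5000)                       # or after 5 s, whichever first
    p.block()                                # __log_wakeup logs [POLLIN] here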
Nov 29 15:45:20 compute-0 nova_compute[189485]: 2025-11-29 15:45:20.825 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:45:21 compute-0 nova_compute[189485]: 2025-11-29 15:45:21.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:45:21 compute-0 nova_compute[189485]: 2025-11-29 15:45:21.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
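The _reclaim_queued_deletes task bails out immediately because reclaim_instance_interval is left at its default. An illustrative guard matching the message above (not nova's literal code):

    # nova.conf: [DEFAULT] reclaim_instance_interval = 0 disables reclaim,
    # so soft-deleted instances are never purged by this periodic task.
    reclaim_instance_interval = 0

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: look up instances soft-deleted more than
        # reclaim_instance_interval seconds ago and delete them for real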
Nov 29 15:45:23 compute-0 podman[249079]: 2025-11-29 15:45:23.702969545 +0000 UTC m=+0.144933937 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:45:24 compute-0 nova_compute[189485]: 2025-11-29 15:45:24.737 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:25 compute-0 nova_compute[189485]: 2025-11-29 15:45:25.393 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:26 compute-0 podman[249098]: 2025-11-29 15:45:26.613416517 +0000 UTC m=+0.070239096 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:45:27 compute-0 nova_compute[189485]: 2025-11-29 15:45:27.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:45:29 compute-0 nova_compute[189485]: 2025-11-29 15:45:29.660 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431114.6578672, b5d60fb8-b63e-4b0a-b908-00453be8ce37 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:45:29 compute-0 nova_compute[189485]: 2025-11-29 15:45:29.661 189489 INFO nova.compute.manager [-] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] VM Stopped (Lifecycle Event)
Nov 29 15:45:29 compute-0 nova_compute[189485]: 2025-11-29 15:45:29.695 189489 DEBUG nova.compute.manager [None req-18fac790-b296-4e64-9be8-ca2602298714 - - - - - -] [instance: b5d60fb8-b63e-4b0a-b908-00453be8ce37] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
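Here the libvirt driver surfaces a Stopped lifecycle event for instance b5d60fb8-b63e-4b0a-b908-00453be8ce37, and the compute manager re-reads the power state before acting on it, so a stale event cannot overwrite fresher state. A schematic of that flow (names are illustrative, not nova's exact API):

    class LifecycleEvent:
        def __init__(self, uuid, transition):
            self.uuid = uuid
            self.transition = transition

    def handle_lifecycle_event(event, get_power_state):
        # "VM Stopped (Lifecycle Event)" in the log
        print(f"[instance: {event.uuid}] VM {event.transition} (Lifecycle Event)")
        # "Checking state": re-read the hypervisor's power state before syncing
        return get_power_state(event.uuid)

    handle_lifecycle_event(
        LifecycleEvent("b5d60fb8-b63e-4b0a-b908-00453be8ce37", "Stopped"),
        lambda uuid: "shutoff",
    )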
Nov 29 15:45:29 compute-0 nova_compute[189485]: 2025-11-29 15:45:29.741 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:29 compute-0 podman[203677]: time="2025-11-29T15:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:45:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:45:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4325 "" "Go-http-client/1.1"
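The two GET lines are a client (here the podman exporter) talking to the libpod REST API over the podman service socket. The same containers/json query as a self-contained sketch, assuming the socket path used elsewhere in this log (/run/podman/podman.sock):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket (the podman service socket)."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self._socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")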
Nov 29 15:45:30 compute-0 nova_compute[189485]: 2025-11-29 15:45:30.396 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:31 compute-0 openstack_network_exporter[205841]: ERROR   15:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:45:31 compute-0 openstack_network_exporter[205841]: ERROR   15:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:45:31 compute-0 openstack_network_exporter[205841]: ERROR   15:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:45:31 compute-0 openstack_network_exporter[205841]: ERROR   15:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:45:31 compute-0 openstack_network_exporter[205841]: ERROR   15:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
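These exporter errors are lookup failures, not crashes: like ovs-appctl, the exporter resolves a daemon by finding a <daemon>.<pid>.ctl control socket under the OVS/OVN run directory, and a compute node runs no ovn-northd (and its vswitchd has no userspace datapath for the pmd-* commands). A sketch of that resolution, with the conventional default rundir as an assumption:

    import glob

    def find_ctl_socket(daemon, rundir="/var/run/openvswitch"):
        """Locate <daemon>.<pid>.ctl the way appctl-style tools do."""
        matches = glob.glob(f"{rundir}/{daemon}.*.ctl")
        if not matches:
            raise FileNotFoundError(
                f"no control socket files found for {daemon}")
        return matches[0]

    # find_ctl_socket("ovs-vswitchd") succeeds on this host;
    # find_ctl_socket("ovn-northd") raises, matching the errors above.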
Nov 29 15:45:34 compute-0 nova_compute[189485]: 2025-11-29 15:45:34.743 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:35 compute-0 nova_compute[189485]: 2025-11-29 15:45:35.397 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:39 compute-0 nova_compute[189485]: 2025-11-29 15:45:39.748 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:40 compute-0 nova_compute[189485]: 2025-11-29 15:45:40.400 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:40 compute-0 podman[249122]: 2025-11-29 15:45:40.639079081 +0000 UTC m=+0.094919325 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:45:44 compute-0 nova_compute[189485]: 2025-11-29 15:45:44.753 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:45 compute-0 nova_compute[189485]: 2025-11-29 15:45:45.404 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:45 compute-0 ovn_controller[97827]: 2025-11-29T15:45:45Z|00064|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Nov 29 15:45:48 compute-0 podman[249145]: 2025-11-29 15:45:48.628037165 +0000 UTC m=+0.083484730 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 15:45:49 compute-0 nova_compute[189485]: 2025-11-29 15:45:49.757 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:50 compute-0 nova_compute[189485]: 2025-11-29 15:45:50.414 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:50 compute-0 podman[249166]: 2025-11-29 15:45:50.645716603 +0000 UTC m=+0.098052896 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, config_id=edpm, io.openshift.tags=base rhel9, name=ubi9, version=9.4, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:45:50 compute-0 podman[249175]: 2025-11-29 15:45:50.648858777 +0000 UTC m=+0.086931470 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, release=1755695350, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 29 15:45:50 compute-0 podman[249168]: 2025-11-29 15:45:50.679456673 +0000 UTC m=+0.111572337 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 15:45:50 compute-0 podman[249167]: 2025-11-29 15:45:50.681779035 +0000 UTC m=+0.117409443 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.build-date=20251125)
Nov 29 15:45:50 compute-0 podman[249169]: 2025-11-29 15:45:50.6924558 +0000 UTC m=+0.133954865 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:45:54 compute-0 podman[249268]: 2025-11-29 15:45:54.669653597 +0000 UTC m=+0.120051774 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:45:54 compute-0 nova_compute[189485]: 2025-11-29 15:45:54.763 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:55 compute-0 nova_compute[189485]: 2025-11-29 15:45:55.417 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:45:57 compute-0 podman[249286]: 2025-11-29 15:45:57.625647045 +0000 UTC m=+0.077477397 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
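node_exporter's systemd collector is restricted by the --collector.systemd.unit-include pattern shown in config_data, so only EDPM-relevant units are scraped. A quick check of which unit names that regex admits; node_exporter anchors the pattern, so fullmatch approximates its behavior:

    import re

    unit_include = re.compile(
        r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ["edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"]:
        print(unit, bool(unit_include.fullmatch(unit)))
    # sshd.service is the only one excluded from collection here.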
Nov 29 15:45:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:59.193 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:45:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:59.194 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:45:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:45:59.194 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
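The acquiring/acquired/released trio above is oslo_concurrency's standard lock tracing around the agent's child-process check. The same named-lock pattern in miniature, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named lock held; the decorator emits the
        # 'acquired ... waited' and 'released ... held' debug lines.
        pass

    _check_child_processes()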
Nov 29 15:45:59 compute-0 podman[203677]: time="2025-11-29T15:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:45:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:45:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4327 "" "Go-http-client/1.1"
Nov 29 15:45:59 compute-0 nova_compute[189485]: 2025-11-29 15:45:59.787 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:00 compute-0 nova_compute[189485]: 2025-11-29 15:46:00.419 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.056 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.057 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
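The warning means the [pollsters] source has more pollsters than the single worker thread processing them, so polls serialize and a cycle takes roughly the sum of the individual poll times. A toy demonstration of why the cycle stretches:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(meter):
        time.sleep(0.1)  # stand-in for one pollster's work
        return meter

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as executor:  # "[1] threads"
        list(executor.map(poll, ["cpu", "memory.usage",
                                 "disk.device.read.bytes"]))
    print(f"cycle took {time.monotonic() - start:.2f}s")  # ~0.3s, not ~0.1s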
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.057 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.058 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.058 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.059 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.060 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.061 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.062 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.075 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:46:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:46:01 compute-0 openstack_network_exporter[205841]: ERROR   15:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:46:01 compute-0 openstack_network_exporter[205841]: ERROR   15:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:46:01 compute-0 openstack_network_exporter[205841]: ERROR   15:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:46:01 compute-0 openstack_network_exporter[205841]: ERROR   15:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:46:01 compute-0 openstack_network_exporter[205841]: ERROR   15:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:46:04 compute-0 nova_compute[189485]: 2025-11-29 15:46:04.792 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:05 compute-0 nova_compute[189485]: 2025-11-29 15:46:05.422 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:09 compute-0 nova_compute[189485]: 2025-11-29 15:46:09.797 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:10 compute-0 nova_compute[189485]: 2025-11-29 15:46:10.425 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:10 compute-0 nova_compute[189485]: 2025-11-29 15:46:10.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:11 compute-0 podman[249314]: 2025-11-29 15:46:11.668238419 +0000 UTC m=+0.102025953 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:46:12 compute-0 nova_compute[189485]: 2025-11-29 15:46:12.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:12 compute-0 nova_compute[189485]: 2025-11-29 15:46:12.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:46:12 compute-0 nova_compute[189485]: 2025-11-29 15:46:12.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:46:12 compute-0 nova_compute[189485]: 2025-11-29 15:46:12.517 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 15:46:14 compute-0 nova_compute[189485]: 2025-11-29 15:46:14.804 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:15 compute-0 nova_compute[189485]: 2025-11-29 15:46:15.428 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:16 compute-0 nova_compute[189485]: 2025-11-29 15:46:16.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:17 compute-0 nova_compute[189485]: 2025-11-29 15:46:17.478 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:17 compute-0 nova_compute[189485]: 2025-11-29 15:46:17.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:17 compute-0 nova_compute[189485]: 2025-11-29 15:46:17.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:19 compute-0 nova_compute[189485]: 2025-11-29 15:46:19.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:19 compute-0 nova_compute[189485]: 2025-11-29 15:46:19.536 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:46:19 compute-0 nova_compute[189485]: 2025-11-29 15:46:19.537 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:46:19 compute-0 nova_compute[189485]: 2025-11-29 15:46:19.538 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:46:19 compute-0 nova_compute[189485]: 2025-11-29 15:46:19.538 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:46:19 compute-0 podman[249338]: 2025-11-29 15:46:19.70130944 +0000 UTC m=+0.142200344 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
Nov 29 15:46:19 compute-0 nova_compute[189485]: 2025-11-29 15:46:19.809 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.006 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.007 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5394MB free_disk=72.37865447998047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.008 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.008 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.103 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.103 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.142 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.161 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.164 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.165 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.156s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:46:20 compute-0 nova_compute[189485]: 2025-11-29 15:46:20.430 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:21 compute-0 nova_compute[189485]: 2025-11-29 15:46:21.165 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:21 compute-0 podman[249360]: 2025-11-29 15:46:21.686319017 +0000 UTC m=+0.118296006 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 15:46:21 compute-0 podman[249359]: 2025-11-29 15:46:21.689437331 +0000 UTC m=+0.123458895 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 15:46:21 compute-0 podman[249358]: 2025-11-29 15:46:21.705443167 +0000 UTC m=+0.153865665 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, release=1214.1726694543, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, config_id=edpm, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler)
Nov 29 15:46:21 compute-0 podman[249361]: 2025-11-29 15:46:21.718803594 +0000 UTC m=+0.147485356 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 15:46:21 compute-0 podman[249364]: 2025-11-29 15:46:21.723642123 +0000 UTC m=+0.143082328 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, distribution-scope=public, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 29 15:46:22 compute-0 nova_compute[189485]: 2025-11-29 15:46:22.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:46:22 compute-0 nova_compute[189485]: 2025-11-29 15:46:22.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:46:24 compute-0 nova_compute[189485]: 2025-11-29 15:46:24.816 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:25 compute-0 nova_compute[189485]: 2025-11-29 15:46:25.438 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:46:25 compute-0 podman[249454]: 2025-11-29 15:46:25.705483512 +0000 UTC m=+0.144634629 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 15:46:28 compute-0 podman[249474]: 2025-11-29 15:46:28.615092932 +0000 UTC m=+0.068961230 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:46:29 compute-0 podman[203677]: time="2025-11-29T15:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:46:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:46:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
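
The two GET lines above are podman_exporter polling podman's libpod REST API over its unix socket (the exporter is configured with CONTAINER_HOST=unix:///run/podman/podman.sock further down in this log). The same container listing can be reproduced with only the standard library; a minimal sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Plain HTTP over a unix socket, as podman's API service expects."""

        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the access-log line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])
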
Nov 29 15:46:29 compute-0 nova_compute[189485]: 2025-11-29 15:46:29.822 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:30 compute-0 nova_compute[189485]: 2025-11-29 15:46:30.441 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:31 compute-0 openstack_network_exporter[205841]: ERROR   15:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:46:31 compute-0 openstack_network_exporter[205841]: ERROR   15:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:46:31 compute-0 openstack_network_exporter[205841]: ERROR   15:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:46:31 compute-0 openstack_network_exporter[205841]: ERROR   15:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:46:31 compute-0 openstack_network_exporter[205841]: ERROR   15:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
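
These ERROR lines recur on every scrape: openstack_network_exporter drives appctl-style calls by locating each daemon's control socket (<rundir>/<daemon>.<pid>.ctl), and this compute node runs no ovn-northd, so that lookup can never succeed; the dpif-netdev/pmd-* calls additionally require a userspace (netdev) datapath, which this host does not have. A sketch of the discovery step, assuming the rundirs the exporter container mounts (/run/openvswitch and /run/ovn):

    import glob
    import os

    def find_control_socket(rundir, daemon):
        """appctl-style discovery: a running daemon creates
        <daemon>.<pid>.ctl in its rundir; no match yields the
        'no control socket files found' error seen above."""
        matches = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
        return matches[0] if matches else None

    for rundir, daemon in [("/run/openvswitch", "ovs-vswitchd"),
                           ("/run/openvswitch", "ovsdb-server"),
                           ("/run/ovn", "ovn-northd")]:
        sock = find_control_socket(rundir, daemon)
        print(f"{daemon}: {sock or 'no control socket files found'}")
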
Nov 29 15:46:34 compute-0 nova_compute[189485]: 2025-11-29 15:46:34.825 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:35 compute-0 nova_compute[189485]: 2025-11-29 15:46:35.444 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:39 compute-0 nova_compute[189485]: 2025-11-29 15:46:39.830 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:40 compute-0 nova_compute[189485]: 2025-11-29 15:46:40.448 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:42 compute-0 podman[249497]: 2025-11-29 15:46:42.656358113 +0000 UTC m=+0.094336507 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:46:44 compute-0 nova_compute[189485]: 2025-11-29 15:46:44.834 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:45 compute-0 nova_compute[189485]: 2025-11-29 15:46:45.452 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:49 compute-0 nova_compute[189485]: 2025-11-29 15:46:49.839 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:50 compute-0 nova_compute[189485]: 2025-11-29 15:46:50.455 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:50 compute-0 podman[249520]: 2025-11-29 15:46:50.692477275 +0000 UTC m=+0.130854692 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
Nov 29 15:46:52 compute-0 podman[249539]: 2025-11-29 15:46:52.669108118 +0000 UTC m=+0.103321197 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, build-date=2024-09-18T21:23:30, architecture=x86_64, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Nov 29 15:46:52 compute-0 podman[249540]: 2025-11-29 15:46:52.674601705 +0000 UTC m=+0.117474805 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 15:46:52 compute-0 podman[249549]: 2025-11-29 15:46:52.696077207 +0000 UTC m=+0.104860958 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vcs-type=git, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_id=edpm)
Nov 29 15:46:52 compute-0 podman[249541]: 2025-11-29 15:46:52.709203567 +0000 UTC m=+0.136382008 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:46:52 compute-0 podman[249542]: 2025-11-29 15:46:52.75354395 +0000 UTC m=+0.183664350 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 15:46:54 compute-0 nova_compute[189485]: 2025-11-29 15:46:54.843 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:55 compute-0 nova_compute[189485]: 2025-11-29 15:46:55.458 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:46:56 compute-0 podman[249635]: 2025-11-29 15:46:56.689064974 +0000 UTC m=+0.115310637 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:46:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:46:59.195 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:46:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:46:59.196 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:46:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:46:59.196 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
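
The acquire/acquired/released trio above is oslo.concurrency's standard locking pattern: the method is wrapped with lockutils.synchronized, whose inner wrapper logs the wait and hold durations at DEBUG. A minimal reproduction of the same pattern:

    from oslo_concurrency import lockutils

    # With logging configured at DEBUG this emits the same
    # "Acquiring lock" / "acquired ... waited" / "released ... held"
    # sequence as the ovn_metadata_agent lines above.
    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass

    check_child_processes()
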
Nov 29 15:46:59 compute-0 podman[249654]: 2025-11-29 15:46:59.635302731 +0000 UTC m=+0.082555343 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:46:59 compute-0 podman[203677]: time="2025-11-29T15:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:46:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:46:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4329 "" "Go-http-client/1.1"
Nov 29 15:46:59 compute-0 nova_compute[189485]: 2025-11-29 15:46:59.847 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:00 compute-0 nova_compute[189485]: 2025-11-29 15:47:00.460 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:01 compute-0 openstack_network_exporter[205841]: ERROR   15:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:47:01 compute-0 openstack_network_exporter[205841]: ERROR   15:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:47:01 compute-0 openstack_network_exporter[205841]: ERROR   15:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:47:01 compute-0 openstack_network_exporter[205841]: ERROR   15:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:47:01 compute-0 openstack_network_exporter[205841]: ERROR   15:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:47:03 compute-0 nova_compute[189485]: 2025-11-29 15:47:03.245 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:04 compute-0 nova_compute[189485]: 2025-11-29 15:47:04.851 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:05 compute-0 nova_compute[189485]: 2025-11-29 15:47:05.463 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:06 compute-0 nova_compute[189485]: 2025-11-29 15:47:06.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:06 compute-0 nova_compute[189485]: 2025-11-29 15:47:06.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 15:47:06 compute-0 nova_compute[189485]: 2025-11-29 15:47:06.514 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 15:47:09 compute-0 nova_compute[189485]: 2025-11-29 15:47:09.854 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:10 compute-0 nova_compute[189485]: 2025-11-29 15:47:10.467 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:11 compute-0 nova_compute[189485]: 2025-11-29 15:47:11.514 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:13 compute-0 nova_compute[189485]: 2025-11-29 15:47:13.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:13 compute-0 nova_compute[189485]: 2025-11-29 15:47:13.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:47:13 compute-0 nova_compute[189485]: 2025-11-29 15:47:13.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 15:47:13 compute-0 nova_compute[189485]: 2025-11-29 15:47:13.505 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 15:47:13 compute-0 podman[249676]: 2025-11-29 15:47:13.65019625 +0000 UTC m=+0.094685577 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:47:14 compute-0 nova_compute[189485]: 2025-11-29 15:47:14.858 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:15 compute-0 nova_compute[189485]: 2025-11-29 15:47:15.471 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:18 compute-0 nova_compute[189485]: 2025-11-29 15:47:18.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:19 compute-0 nova_compute[189485]: 2025-11-29 15:47:19.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:19 compute-0 nova_compute[189485]: 2025-11-29 15:47:19.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:19 compute-0 nova_compute[189485]: 2025-11-29 15:47:19.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:19 compute-0 nova_compute[189485]: 2025-11-29 15:47:19.861 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:20 compute-0 nova_compute[189485]: 2025-11-29 15:47:20.473 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:47:20 compute-0 nova_compute[189485]: 2025-11-29 15:47:20.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:20 compute-0 nova_compute[189485]: 2025-11-29 15:47:20.527 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:47:20 compute-0 nova_compute[189485]: 2025-11-29 15:47:20.527 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:47:20 compute-0 nova_compute[189485]: 2025-11-29 15:47:20.528 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:47:20 compute-0 nova_compute[189485]: 2025-11-29 15:47:20.529 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.029 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.030 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5388MB free_disk=72.37865447998047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.031 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.031 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.384 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.385 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.476 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.603 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.604 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.618 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 15:47:21 compute-0 podman[249701]: 2025-11-29 15:47:21.643984562 +0000 UTC m=+0.091154511 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.643 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.674 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.693 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
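
Placement turns the inventory above into schedulable capacity per resource class as (total - reserved) * allocation_ratio, so this provider offers 32 VCPU, 7167 MB of RAM, and about 70 GB of disk. The arithmetic, using the exact values from the log:

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB ~70.2
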
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.694 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:47:21 compute-0 nova_compute[189485]: 2025-11-29 15:47:21.695 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.664s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:47:22 compute-0 nova_compute[189485]: 2025-11-29 15:47:22.695 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:22 compute-0 nova_compute[189485]: 2025-11-29 15:47:22.696 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:47:22 compute-0 nova_compute[189485]: 2025-11-29 15:47:22.697 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
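
All of the "Running periodic task" lines in this stretch come from oslo.service: ComputeManager methods are registered with the periodic_task decorator and driven by run_periodic_tasks on a timer, and each task decides internally whether there is work to do (here _reclaim_queued_deletes returns early because reclaim_instance_interval <= 0). A sketch of that registration pattern, with an illustrative spacing value:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _reclaim_queued_deletes(self, context):
            # nova's real task returns early when the configured
            # reclaim_instance_interval is <= 0, as logged above.
            print("reclaiming queued deletes")

    mgr = Manager(CONF)
    mgr.run_periodic_tasks(None)  # the service's timer loop calls this
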
Nov 29 15:47:23 compute-0 podman[249732]: 2025-11-29 15:47:23.65627536 +0000 UTC m=+0.088535531 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 29 15:47:23 compute-0 podman[249724]: 2025-11-29 15:47:23.661027917 +0000 UTC m=+0.103871273 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:47:23 compute-0 podman[249723]: 2025-11-29 15:47:23.669484064 +0000 UTC m=+0.108543718 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 15:47:23 compute-0 podman[249722]: 2025-11-29 15:47:23.683277685 +0000 UTC m=+0.137754263 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, io.buildah.version=1.29.0, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-type=git, io.openshift.expose-services=, build-date=2024-09-18T21:23:30)
Nov 29 15:47:23 compute-0 podman[249731]: 2025-11-29 15:47:23.707193758 +0000 UTC m=+0.142966554 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
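The podman health_status records above embed each container's full launch configuration as a Python dict literal in the config_data field. A minimal sketch of pulling the name, health state, and that dict out of such a line, assuming the field layout shown here (the journal.txt filename and the naive first-match field scan are illustrative):

    import ast

    def parse_health_event(line: str):
        def field(name):
            # naive first-match scan; fine for the layout in these records
            start = line.index(name + "=") + len(name) + 1
            return line[start:line.index(",", start)]

        # config_data is a Python dict literal; find its balanced-brace span
        start = line.index("config_data=") + len("config_data=")
        depth, end = 0, start
        for i, ch in enumerate(line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    end = i + 1
                    break
        config = ast.literal_eval(line[start:end])
        return field("name"), field("health_status"), config

    # journal.txt: hypothetical capture of lines like the ones above
    name, status, cfg = parse_health_event(open("journal.txt").readline())
    print(name, status, cfg["healthcheck"]["test"])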
Nov 29 15:47:24 compute-0 nova_compute[189485]: 2025-11-29 15:47:24.865 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:25 compute-0 nova_compute[189485]: 2025-11-29 15:47:25.477 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:27 compute-0 nova_compute[189485]: 2025-11-29 15:47:27.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
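The "Running periodic task" lines here and below come from oslo.service's task runner. A minimal sketch of how such tasks are declared, assuming the standard oslo_service.periodic_task API; the manager class, task name, and spacing are illustrative, not nova's actual code:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class DemoManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # spacing and name are illustrative; nova defines its own tasks
        @periodic_task.periodic_task(spacing=300)
        def _cleanup_expired_console_auth_tokens(self, context):
            pass  # body elided; each run emits a "Running periodic task" DEBUG line

    DemoManager().run_periodic_tasks(context=None)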
Nov 29 15:47:27 compute-0 podman[249817]: 2025-11-29 15:47:27.638328763 +0000 UTC m=+0.093542236 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:47:29 compute-0 podman[203677]: time="2025-11-29T15:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:47:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:47:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4327 "" "Go-http-client/1.1"
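The two GET lines above are the podman system service answering libpod REST calls over its unix socket. A minimal sketch of issuing the same containers/json request as raw HTTP, assuming the socket path from the podman_exporter config later in this log (/run/podman/podman.sock) and the /v4.9.3 API version shown here:

    import socket

    REQ = (b"GET /v4.9.3/libpod/containers/json?all=true HTTP/1.1\r\n"
           b"Host: d\r\nConnection: close\r\n\r\n")

    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect("/run/podman/podman.sock")
        s.sendall(REQ)
        chunks = []
        while data := s.recv(65536):
            chunks.append(data)

    # raw response (headers first, body possibly chunked); print a preview
    print(b"".join(chunks).decode(errors="replace")[:200])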
Nov 29 15:47:29 compute-0 nova_compute[189485]: 2025-11-29 15:47:29.871 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:30 compute-0 nova_compute[189485]: 2025-11-29 15:47:30.480 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:30 compute-0 nova_compute[189485]: 2025-11-29 15:47:30.507 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:47:30 compute-0 nova_compute[189485]: 2025-11-29 15:47:30.507 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 15:47:30 compute-0 podman[249836]: 2025-11-29 15:47:30.704083858 +0000 UTC m=+0.139605624 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
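node_exporter above publishes on host port 9100 (the 'ports' entry) with a --collector.systemd whitelist for the edpm/ovs/virt units. A minimal sketch of scraping it and filtering for the systemd collector's metric, assuming plain HTTP is accepted; the web.config.file in the log points at a TLS config, so a real scrape may need HTTPS plus the telemetry CA:

    import urllib.request

    # plain HTTP for brevity; swap in https + CA if TLS is enforced
    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as r:
        for line in r.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state"):
                print(line)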
Nov 29 15:47:31 compute-0 openstack_network_exporter[205841]: ERROR   15:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:47:31 compute-0 openstack_network_exporter[205841]: ERROR   15:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:47:31 compute-0 openstack_network_exporter[205841]: ERROR   15:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:47:31 compute-0 openstack_network_exporter[205841]: ERROR   15:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:47:31 compute-0 openstack_network_exporter[205841]: ERROR   15:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
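The appctl errors above mean the exporter found no OVS/OVN control sockets to talk to. A minimal sketch of the same check done by hand, assuming the run directories from the exporter's volume mounts (/run/openvswitch and /run/ovn) and the standard ovs-appctl -t invocation:

    import glob
    import subprocess

    ctl_files = glob.glob("/run/openvswitch/*.ctl") + glob.glob("/run/ovn/*.ctl")
    if not ctl_files:
        print("no control socket files found")  # matches the error in the log
    else:
        for ctl in ctl_files:
            out = subprocess.run(["ovs-appctl", "-t", ctl, "version"],
                                 capture_output=True, text=True)
            print(ctl, out.stdout.strip() or out.stderr.strip())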
Nov 29 15:47:32 compute-0 nova_compute[189485]: 2025-11-29 15:47:32.248 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:47:32 compute-0 nova_compute[189485]: 2025-11-29 15:47:32.509 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:47:34 compute-0 nova_compute[189485]: 2025-11-29 15:47:34.875 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:35 compute-0 nova_compute[189485]: 2025-11-29 15:47:35.484 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:39 compute-0 nova_compute[189485]: 2025-11-29 15:47:39.880 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:40 compute-0 nova_compute[189485]: 2025-11-29 15:47:40.487 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:44 compute-0 podman[249860]: 2025-11-29 15:47:44.655205678 +0000 UTC m=+0.103170274 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:47:44 compute-0 nova_compute[189485]: 2025-11-29 15:47:44.885 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:45 compute-0 nova_compute[189485]: 2025-11-29 15:47:45.491 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:49 compute-0 nova_compute[189485]: 2025-11-29 15:47:49.890 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:50 compute-0 nova_compute[189485]: 2025-11-29 15:47:50.494 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:52 compute-0 podman[249884]: 2025-11-29 15:47:52.700351791 +0000 UTC m=+0.141772782 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:47:54 compute-0 podman[249904]: 2025-11-29 15:47:54.641259671 +0000 UTC m=+0.092840696 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=9.4, container_name=kepler, name=ubi9, release-0.7.12=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.buildah.version=1.29.0, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 29 15:47:54 compute-0 podman[249906]: 2025-11-29 15:47:54.657529359 +0000 UTC m=+0.106022361 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:47:54 compute-0 podman[249907]: 2025-11-29 15:47:54.66243175 +0000 UTC m=+0.106860063 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3)
Nov 29 15:47:54 compute-0 podman[249908]: 2025-11-29 15:47:54.666441718 +0000 UTC m=+0.106600416 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, release=1755695350, version=9.6, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Nov 29 15:47:54 compute-0 podman[249905]: 2025-11-29 15:47:54.667883707 +0000 UTC m=+0.117874239 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 15:47:54 compute-0 nova_compute[189485]: 2025-11-29 15:47:54.892 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:55 compute-0 nova_compute[189485]: 2025-11-29 15:47:55.499 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:47:58 compute-0 podman[249998]: 2025-11-29 15:47:58.677998564 +0000 UTC m=+0.115394713 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 15:47:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:47:59.206 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:47:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:47:59.206 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:47:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:47:59.207 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
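The three lockutils lines above (acquiring/acquired/released, with waited and held timings) are emitted by oslo.concurrency's lock wrapper, the "inner" function cited in the paths. A minimal sketch of the pattern, assuming the standard lockutils.synchronized decorator; the function body is illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # body elided; the wrapper logs acquire/hold/release timings

    check_child_processes()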
Nov 29 15:47:59 compute-0 podman[203677]: time="2025-11-29T15:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:47:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:47:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4318 "" "Go-http-client/1.1"
Nov 29 15:47:59 compute-0 nova_compute[189485]: 2025-11-29 15:47:59.896 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:00 compute-0 nova_compute[189485]: 2025-11-29 15:48:00.500 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.059 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.059 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
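The two ceilometer lines above say the [pollsters] source has more pollsters than worker threads, so the cycle runs with [1] thread and each pollster executes serially. A minimal stdlib sketch of that serialization effect (the task count and sleep are illustrative):

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    with ThreadPoolExecutor(max_workers=1) as pool:  # fewer workers than tasks
        t0 = time.time()
        results = list(pool.map(poll, [f"pollster-{i}" for i in range(5)]))

    print(f"{len(results)} pollsters in {time.time() - t0:.1f}s (serialized)")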
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.059 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.060 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c0b5700>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.066 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.068 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.068 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.071 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.072 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.073 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:48:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:48:01.074 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
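The block above is one complete Ceilometer polling cycle: for each pollster the agent runs the [local_instances] discovery, skips the pollster when discovery returns nothing (no instances are running on this compute), and logs "Finished processing" for the pollsters it does execute. A minimal sketch of that control flow, with hypothetical names (this is the pattern the DEBUG lines trace, not the actual code in ceilometer/polling/manager.py):

    # Minimal sketch of the polling cycle traced by the DEBUG lines above.
    # All names are hypothetical stand-ins for the Ceilometer internals.
    def run_polling_task(pollsters, discover):
        for pollster in pollsters:
            resources = discover("local_instances")  # per-pollster discovery
            if not resources:
                # corresponds to "Skip pollster <name>, no resources found this cycle"
                print(f"Skip pollster {pollster.name}, no resources found this cycle")
                continue
            pollster.get_samples(resources)  # poll libvirt and build samples
            print(f"Finished processing pollster [{pollster.name}].")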
Nov 29 15:48:01 compute-0 openstack_network_exporter[205841]: ERROR   15:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:48:01 compute-0 openstack_network_exporter[205841]: ERROR   15:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:48:01 compute-0 openstack_network_exporter[205841]: ERROR   15:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:48:01 compute-0 openstack_network_exporter[205841]: ERROR   15:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:48:01 compute-0 openstack_network_exporter[205841]: ERROR   15:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
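The exporter errors above mean the process could not find the OVS/OVN control sockets it drives appctl-style commands through. ovn-northd normally runs on controller nodes, so its absence on a compute host is expected; the missing ovsdb-server socket points at the runtime directories mounted into the container (/run/openvswitch and /run/ovn, per the openstack_network_exporter config_data logged further down). A quick check for those sockets, with the paths assumed from that mount layout:

    # List the *.ctl control sockets the exporter looks for.
    # Paths are assumptions taken from the container volume mounts in this log.
    import glob

    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")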
Nov 29 15:48:01 compute-0 podman[250018]: 2025-11-29 15:48:01.652226857 +0000 UTC m=+0.096345050 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
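Each podman health_status line above is the result of the container's configured healthcheck (here '/openstack/healthcheck node_exporter', per config_data) running on its timer. The same check can be triggered by hand with podman's standard healthcheck subcommand; a small sketch, container name taken from the log:

    # Re-run the healthcheck podman executes on its timer; exit code 0
    # corresponds to health_status=healthy in the journal lines above.
    import subprocess

    result = subprocess.run(
        ["podman", "healthcheck", "run", "node_exporter"],
        capture_output=True, text=True,
    )
    print(result.returncode, result.stdout, result.stderr)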
Nov 29 15:48:04 compute-0 nova_compute[189485]: 2025-11-29 15:48:04.903 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:05 compute-0 nova_compute[189485]: 2025-11-29 15:48:05.507 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:09 compute-0 nova_compute[189485]: 2025-11-29 15:48:09.907 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:10 compute-0 nova_compute[189485]: 2025-11-29 15:48:10.510 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:13 compute-0 nova_compute[189485]: 2025-11-29 15:48:13.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:13 compute-0 nova_compute[189485]: 2025-11-29 15:48:13.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:48:13 compute-0 nova_compute[189485]: 2025-11-29 15:48:13.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:48:13 compute-0 nova_compute[189485]: 2025-11-29 15:48:13.611 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 15:48:13 compute-0 nova_compute[189485]: 2025-11-29 15:48:13.612 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:14 compute-0 podman[250041]: 2025-11-29 15:48:14.759933137 +0000 UTC m=+0.057354973 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:48:14 compute-0 nova_compute[189485]: 2025-11-29 15:48:14.910 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:15 compute-0 nova_compute[189485]: 2025-11-29 15:48:15.515 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:19 compute-0 nova_compute[189485]: 2025-11-29 15:48:19.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:19 compute-0 nova_compute[189485]: 2025-11-29 15:48:19.913 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:20 compute-0 nova_compute[189485]: 2025-11-29 15:48:20.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:20 compute-0 nova_compute[189485]: 2025-11-29 15:48:20.520 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:21 compute-0 nova_compute[189485]: 2025-11-29 15:48:21.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:21 compute-0 nova_compute[189485]: 2025-11-29 15:48:21.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
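The run of "Running periodic task ComputeManager.*" lines is oslo.service's periodic task machinery walking nova-compute's timer-driven methods; _reclaim_queued_deletes bails out immediately because reclaim_instance_interval is not set to a positive value. A minimal sketch of that pattern using the real oslo_service.periodic_task decorator (the task body and spacing are hypothetical, and the option is assumed registered, as it is in nova):

    # Sketch of an oslo.service periodic task with the same guard nova logs.
    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF  # assumes reclaim_instance_interval is registered (nova does this)

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)  # hypothetical spacing
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                return  # matches "CONF.reclaim_instance_interval <= 0, skipping..."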
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.627 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.628 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.629 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.629 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.989 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
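That warning fires when the reported host topology has a NUMA node spanning more than one physical socket, so nova cannot honor the `socket` PCI NUMA affinity policy. One way to see the topology the host exposes is to correlate each CPU's physical_package_id with its NUMA node in sysfs; a hedged sketch using the standard kernel paths:

    # Count distinct physical packages (sockets) per NUMA node from sysfs.
    # Standard Linux paths; offline CPUs may lack topology entries.
    import glob, os
    from collections import defaultdict

    sockets_per_node = defaultdict(set)
    for cpu in glob.glob("/sys/devices/system/cpu/cpu[0-9]*"):
        pkg_path = os.path.join(cpu, "topology/physical_package_id")
        if not os.path.exists(pkg_path):
            continue
        pkg = open(pkg_path).read().strip()
        for node in glob.glob(os.path.join(cpu, "node[0-9]*")):
            sockets_per_node[os.path.basename(node)].add(pkg)

    for node, pkgs in sorted(sockets_per_node.items()):
        print(node, "sockets:", sorted(pkgs))  # >1 socket per node triggers the warning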
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.991 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5395MB free_disk=72.37865447998047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.991 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:48:22 compute-0 nova_compute[189485]: 2025-11-29 15:48:22.992 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:48:23 compute-0 nova_compute[189485]: 2025-11-29 15:48:23.161 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:48:23 compute-0 nova_compute[189485]: 2025-11-29 15:48:23.162 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:48:23 compute-0 nova_compute[189485]: 2025-11-29 15:48:23.189 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:48:23 compute-0 nova_compute[189485]: 2025-11-29 15:48:23.202 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
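The inventory dict in that line fixes the capacity Placement will schedule against: for each resource class, usable capacity is (total - reserved) * allocation_ratio. Worked through with the values logged above:

    # Capacity implied by the inventory data logged above
    # (Placement's capacity formula: (total - reserved) * allocation_ratio).
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2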
Nov 29 15:48:23 compute-0 nova_compute[189485]: 2025-11-29 15:48:23.204 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:48:23 compute-0 nova_compute[189485]: 2025-11-29 15:48:23.204 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.212s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
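The Acquiring/acquired/released triplets around the resource tracker update are oslo.concurrency's own DEBUG instrumentation for a named in-process lock. The same pattern, reproduced with the real lockutils API and the lock name from the log (the guarded body here is a hypothetical placeholder):

    # Reproduce the lock pattern behind the Acquiring/acquired/released lines.
    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        pass  # hypothetical critical section (nova updates tracked resources here)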
Nov 29 15:48:23 compute-0 podman[250065]: 2025-11-29 15:48:23.654783049 +0000 UTC m=+0.101817528 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 15:48:24 compute-0 nova_compute[189485]: 2025-11-29 15:48:24.918 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:25 compute-0 nova_compute[189485]: 2025-11-29 15:48:25.521 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:25 compute-0 podman[250085]: 2025-11-29 15:48:25.635014115 +0000 UTC m=+0.085038577 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:48:25 compute-0 podman[250088]: 2025-11-29 15:48:25.650380148 +0000 UTC m=+0.089817055 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, config_id=edpm, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, release=1755695350, name=ubi9-minimal, distribution-scope=public, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 15:48:25 compute-0 podman[250086]: 2025-11-29 15:48:25.66757763 +0000 UTC m=+0.115569437 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:48:25 compute-0 podman[250084]: 2025-11-29 15:48:25.672943744 +0000 UTC m=+0.117350335 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, release=1214.1726694543, release-0.7.12=, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.4, vcs-type=git, io.buildah.version=1.29.0, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Nov 29 15:48:25 compute-0 podman[250087]: 2025-11-29 15:48:25.689687284 +0000 UTC m=+0.125521035 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:48:29 compute-0 podman[250179]: 2025-11-29 15:48:29.664195795 +0000 UTC m=+0.105755424 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 15:48:29 compute-0 podman[203677]: time="2025-11-29T15:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:48:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:48:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4327 "" "Go-http-client/1.1"
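Those two GET lines are the podman service's access log for podman_exporter scraping the libpod REST API over the UNIX socket mounted into that container (/run/podman/podman.sock, per its config_data). The same endpoint can be queried by hand, for example through curl's standard --unix-socket option; a small sketch (the "http://d" host is an arbitrary placeholder curl requires):

    # Query the same libpod endpoint the exporter hits, over the podman socket.
    import subprocess

    out = subprocess.run(
        ["curl", "-s", "--unix-socket", "/run/podman/podman.sock",
         "http://d/v4.9.3/libpod/containers/json?all=true"],
        capture_output=True, text=True,
    ).stdout
    print(out[:200])  # JSON array of containers, as returned to the exporter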
Nov 29 15:48:29 compute-0 nova_compute[189485]: 2025-11-29 15:48:29.923 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:30 compute-0 nova_compute[189485]: 2025-11-29 15:48:30.526 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:31 compute-0 openstack_network_exporter[205841]: ERROR   15:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:48:31 compute-0 openstack_network_exporter[205841]: ERROR   15:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:48:31 compute-0 openstack_network_exporter[205841]: ERROR   15:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:48:31 compute-0 openstack_network_exporter[205841]: ERROR   15:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:48:31 compute-0 openstack_network_exporter[205841]: ERROR   15:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:48:32 compute-0 podman[250198]: 2025-11-29 15:48:32.693839088 +0000 UTC m=+0.129592004 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 15:48:34 compute-0 nova_compute[189485]: 2025-11-29 15:48:34.927 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:35 compute-0 nova_compute[189485]: 2025-11-29 15:48:35.530 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:39 compute-0 nova_compute[189485]: 2025-11-29 15:48:39.929 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:40 compute-0 nova_compute[189485]: 2025-11-29 15:48:40.533 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:44 compute-0 nova_compute[189485]: 2025-11-29 15:48:44.933 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:45 compute-0 nova_compute[189485]: 2025-11-29 15:48:45.536 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:45 compute-0 podman[250222]: 2025-11-29 15:48:45.648211708 +0000 UTC m=+0.085876890 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:48:49 compute-0 nova_compute[189485]: 2025-11-29 15:48:49.936 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:50 compute-0 nova_compute[189485]: 2025-11-29 15:48:50.539 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:54 compute-0 podman[250245]: 2025-11-29 15:48:54.645060592 +0000 UTC m=+0.094271065 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 29 15:48:54 compute-0 nova_compute[189485]: 2025-11-29 15:48:54.938 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:55 compute-0 nova_compute[189485]: 2025-11-29 15:48:55.542 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:48:56 compute-0 podman[250263]: 2025-11-29 15:48:56.685138136 +0000 UTC m=+0.121269270 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, version=9.4, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, architecture=x86_64, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, name=ubi9)
Nov 29 15:48:56 compute-0 podman[250264]: 2025-11-29 15:48:56.694408196 +0000 UTC m=+0.121617060 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:48:56 compute-0 podman[250272]: 2025-11-29 15:48:56.695490735 +0000 UTC m=+0.100591135 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6)
Nov 29 15:48:56 compute-0 podman[250265]: 2025-11-29 15:48:56.716219622 +0000 UTC m=+0.138993007 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 15:48:56 compute-0 podman[250266]: 2025-11-29 15:48:56.719014307 +0000 UTC m=+0.148815721 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 15:48:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:48:59.207 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:48:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:48:59.207 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:48:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:48:59.208 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:48:59 compute-0 podman[203677]: time="2025-11-29T15:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:48:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:48:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4327 "" "Go-http-client/1.1"
Nov 29 15:48:59 compute-0 nova_compute[189485]: 2025-11-29 15:48:59.942 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:00 compute-0 nova_compute[189485]: 2025-11-29 15:49:00.545 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:00 compute-0 podman[250360]: 2025-11-29 15:49:00.670044895 +0000 UTC m=+0.106775400 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, container_name=multipathd, tcib_managed=true)
Nov 29 15:49:01 compute-0 openstack_network_exporter[205841]: ERROR   15:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:49:01 compute-0 openstack_network_exporter[205841]: ERROR   15:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:49:01 compute-0 openstack_network_exporter[205841]: ERROR   15:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:49:01 compute-0 openstack_network_exporter[205841]: ERROR   15:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:49:01 compute-0 openstack_network_exporter[205841]: ERROR   15:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:49:03 compute-0 podman[250379]: 2025-11-29 15:49:03.661618985 +0000 UTC m=+0.108485897 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:49:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:49:04.122 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:49:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:49:04.124 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 15:49:04 compute-0 nova_compute[189485]: 2025-11-29 15:49:04.123 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:49:04.125 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:49:04 compute-0 nova_compute[189485]: 2025-11-29 15:49:04.946 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:05 compute-0 nova_compute[189485]: 2025-11-29 15:49:05.549 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:09 compute-0 nova_compute[189485]: 2025-11-29 15:49:09.951 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:10 compute-0 nova_compute[189485]: 2025-11-29 15:49:10.552 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:14 compute-0 nova_compute[189485]: 2025-11-29 15:49:14.954 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:15 compute-0 nova_compute[189485]: 2025-11-29 15:49:15.205 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:15 compute-0 nova_compute[189485]: 2025-11-29 15:49:15.205 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:49:15 compute-0 nova_compute[189485]: 2025-11-29 15:49:15.205 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:49:15 compute-0 nova_compute[189485]: 2025-11-29 15:49:15.223 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 15:49:15 compute-0 nova_compute[189485]: 2025-11-29 15:49:15.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:15 compute-0 nova_compute[189485]: 2025-11-29 15:49:15.554 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:16 compute-0 podman[250402]: 2025-11-29 15:49:16.662873682 +0000 UTC m=+0.098796217 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:49:19 compute-0 nova_compute[189485]: 2025-11-29 15:49:19.957 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:20 compute-0 nova_compute[189485]: 2025-11-29 15:49:20.559 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:21 compute-0 nova_compute[189485]: 2025-11-29 15:49:21.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:21 compute-0 nova_compute[189485]: 2025-11-29 15:49:21.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:21 compute-0 nova_compute[189485]: 2025-11-29 15:49:21.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.537 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.538 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.539 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.539 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.961 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.962 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5393MB free_disk=72.37865447998047GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.962 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:49:22 compute-0 nova_compute[189485]: 2025-11-29 15:49:22.963 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:49:23 compute-0 nova_compute[189485]: 2025-11-29 15:49:23.054 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:49:23 compute-0 nova_compute[189485]: 2025-11-29 15:49:23.055 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:49:23 compute-0 nova_compute[189485]: 2025-11-29 15:49:23.089 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:49:23 compute-0 nova_compute[189485]: 2025-11-29 15:49:23.104 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:49:23 compute-0 nova_compute[189485]: 2025-11-29 15:49:23.106 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:49:23 compute-0 nova_compute[189485]: 2025-11-29 15:49:23.107 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:49:24 compute-0 nova_compute[189485]: 2025-11-29 15:49:24.959 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:25 compute-0 nova_compute[189485]: 2025-11-29 15:49:25.108 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:25 compute-0 nova_compute[189485]: 2025-11-29 15:49:25.108 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:25 compute-0 nova_compute[189485]: 2025-11-29 15:49:25.109 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:49:25 compute-0 nova_compute[189485]: 2025-11-29 15:49:25.558 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:25 compute-0 podman[250428]: 2025-11-29 15:49:25.692016334 +0000 UTC m=+0.128832284 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 15:49:27 compute-0 podman[250448]: 2025-11-29 15:49:27.654225796 +0000 UTC m=+0.098613132 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 15:49:27 compute-0 podman[250447]: 2025-11-29 15:49:27.678772605 +0000 UTC m=+0.130580370 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, maintainer=Red Hat, Inc., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, architecture=x86_64, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible)
Nov 29 15:49:27 compute-0 podman[250462]: 2025-11-29 15:49:27.683020739 +0000 UTC m=+0.102141146 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9)
Nov 29 15:49:27 compute-0 podman[250454]: 2025-11-29 15:49:27.685682301 +0000 UTC m=+0.111160798 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:49:27 compute-0 podman[250460]: 2025-11-29 15:49:27.716372996 +0000 UTC m=+0.134737812 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 15:49:29 compute-0 podman[203677]: time="2025-11-29T15:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:49:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:49:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4320 "" "Go-http-client/1.1"
Nov 29 15:49:29 compute-0 nova_compute[189485]: 2025-11-29 15:49:29.963 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:30 compute-0 nova_compute[189485]: 2025-11-29 15:49:30.562 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:31 compute-0 openstack_network_exporter[205841]: ERROR   15:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:49:31 compute-0 openstack_network_exporter[205841]: ERROR   15:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:49:31 compute-0 openstack_network_exporter[205841]: ERROR   15:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:49:31 compute-0 openstack_network_exporter[205841]: ERROR   15:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:49:31 compute-0 openstack_network_exporter[205841]: ERROR   15:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:49:31 compute-0 podman[250544]: 2025-11-29 15:49:31.687091465 +0000 UTC m=+0.120918161 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 15:49:34 compute-0 ovn_controller[97827]: 2025-11-29T15:49:34Z|00065|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Nov 29 15:49:34 compute-0 nova_compute[189485]: 2025-11-29 15:49:34.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:49:34 compute-0 podman[250564]: 2025-11-29 15:49:34.632405241 +0000 UTC m=+0.084089731 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:49:34 compute-0 nova_compute[189485]: 2025-11-29 15:49:34.967 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:35 compute-0 nova_compute[189485]: 2025-11-29 15:49:35.565 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:39 compute-0 nova_compute[189485]: 2025-11-29 15:49:39.972 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:40 compute-0 nova_compute[189485]: 2025-11-29 15:49:40.570 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:41 compute-0 nova_compute[189485]: 2025-11-29 15:49:41.731 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:44 compute-0 nova_compute[189485]: 2025-11-29 15:49:44.920 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:44 compute-0 nova_compute[189485]: 2025-11-29 15:49:44.974 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:45 compute-0 nova_compute[189485]: 2025-11-29 15:49:45.571 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:46 compute-0 nova_compute[189485]: 2025-11-29 15:49:46.809 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:47 compute-0 nova_compute[189485]: 2025-11-29 15:49:47.524 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:47 compute-0 podman[250587]: 2025-11-29 15:49:47.670475839 +0000 UTC m=+0.115395154 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:49:49 compute-0 nova_compute[189485]: 2025-11-29 15:49:49.979 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:50 compute-0 nova_compute[189485]: 2025-11-29 15:49:50.573 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:50 compute-0 nova_compute[189485]: 2025-11-29 15:49:50.759 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:51 compute-0 nova_compute[189485]: 2025-11-29 15:49:51.194 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:51 compute-0 nova_compute[189485]: 2025-11-29 15:49:51.390 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:52 compute-0 nova_compute[189485]: 2025-11-29 15:49:52.721 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:54 compute-0 nova_compute[189485]: 2025-11-29 15:49:54.982 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:55 compute-0 nova_compute[189485]: 2025-11-29 15:49:55.578 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:56 compute-0 podman[250612]: 2025-11-29 15:49:56.666687005 +0000 UTC m=+0.104352026 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 15:49:58 compute-0 nova_compute[189485]: 2025-11-29 15:49:58.455 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:58 compute-0 nova_compute[189485]: 2025-11-29 15:49:58.610 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:58 compute-0 podman[250632]: 2025-11-29 15:49:58.685329445 +0000 UTC m=+0.115550108 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:49:58 compute-0 podman[250633]: 2025-11-29 15:49:58.685732065 +0000 UTC m=+0.108689303 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Nov 29 15:49:58 compute-0 podman[250639]: 2025-11-29 15:49:58.693576636 +0000 UTC m=+0.104476389 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, release=1755695350, io.openshift.expose-services=, name=ubi9-minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41)
Nov 29 15:49:58 compute-0 podman[250631]: 2025-11-29 15:49:58.708342443 +0000 UTC m=+0.143367844 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release=1214.1726694543, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., version=9.4)
Nov 29 15:49:58 compute-0 podman[250634]: 2025-11-29 15:49:58.728830704 +0000 UTC m=+0.151406081 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
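Each podman health_status event above embeds the full container definition as a Python dict literal in its config_data= field, so the bind mounts, environment, and healthcheck command can be recovered from the journal alone. A small helper for doing that; the brace matching is a naive assumption that no brace appears inside a quoted value, which holds for the lines above:

    import ast

    def extract_config_data(log_line: str) -> dict:
        """Parse the config_data={...} dict literal out of a health_status line."""
        start = log_line.index("config_data=") + len("config_data=")
        depth = 0
        for i, ch in enumerate(log_line[start:], start):
            if ch == "{":
                depth += 1
            elif ch == "}":
                depth -= 1
                if depth == 0:
                    return ast.literal_eval(log_line[start:i + 1])
        raise ValueError("unbalanced braces in config_data")

    # extract_config_data(line)["volumes"] lists the bind mounts;
    # extract_config_data(line)["healthcheck"]["test"] is the probe command.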
Nov 29 15:49:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:49:59.207 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:49:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:49:59.208 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:49:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:49:59.208 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
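The acquiring/acquired/released triple above is oslo.concurrency's standard lock tracing, and the nova_compute lines further down (_locked_do_build_and_run_instance, compute_resources) follow the same pattern. A minimal sketch of the two usual ways such locks are taken, with lock names copied from the log and bodies left illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        pass  # runs with the named in-process lock held; wait/hold times logged at DEBUG

    # equivalently, as a context manager:
    with lockutils.lock("compute_resources"):
        pass  # claim or release resources while holding the lock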
Nov 29 15:49:59 compute-0 nova_compute[189485]: 2025-11-29 15:49:59.217 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:49:59 compute-0 podman[203677]: time="2025-11-29T15:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:49:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 15:49:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4328 "" "Go-http-client/1.1"
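The two GET lines above are libpod REST API calls made over the Podman socket by whatever is scraping container state and stats on this host. The same containers/json query can be reproduced by hand; the socket path and the requests-unixsocket package below are assumptions, not taken from the log:

    import requests_unixsocket  # pip install requests-unixsocket

    session = requests_unixsocket.Session()
    resp = session.get(
        "http+unix://%2Frun%2Fpodman%2Fpodman.sock"
        "/v4.9.3/libpod/containers/json?all=true&external=false"
    )
    for ctr in resp.json():
        print(ctr["Names"], ctr["State"])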
Nov 29 15:49:59 compute-0 nova_compute[189485]: 2025-11-29 15:49:59.986 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.558 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.581 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.733 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.733 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.759 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.863 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.864 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.874 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.875 189489 INFO nova.compute.claims [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Claim successful on node compute-0.ctlplane.example.com
Nov 29 15:50:00 compute-0 nova_compute[189485]: 2025-11-29 15:50:00.998 189489 DEBUG nova.compute.provider_tree [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.013 189489 DEBUG nova.scheduler.client.report [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.038 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.175s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
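Placement treats the inventory reported above as schedulable capacity of roughly (total - reserved) * allocation_ratio per resource class. Plugging in the values from the report line a few lines up:

    # Values copied from the inventory data logged by nova.scheduler.client.report.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2

So this 8-vCPU host can accept 32 vCPUs of instances at the 4.0 overcommit ratio, which is why the claim above succeeds immediately.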
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.039 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.059 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.060 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.060 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.061 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
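From here on the ceilometer agent repeats one cycle per pollster: register it onto the single-worker ThreadPoolExecutor, run its discovery method once per cycle (note the shared discovery cache [{'local_instances': []}] in the lines below), and skip the meter when discovery finds no resources, plausibly because the instance claimed above has not finished building yet. A structural sketch of that loop, with hypothetical names standing in for ceilometer's internals:

    from concurrent.futures import ThreadPoolExecutor

    def run_polling_cycle(pollsters, discover_local_instances, workers=1):
        executor = ThreadPoolExecutor(max_workers=workers)
        discovery_cache = {}  # discovery runs once per cycle, shared by all pollsters

        def run_one(pollster):
            if "local_instances" not in discovery_cache:
                discovery_cache["local_instances"] = discover_local_instances()
            resources = discovery_cache["local_instances"]
            if not resources:
                return []  # logged as "Skip pollster <name>, no resources found this cycle"
            return list(pollster.get_samples(resources))  # hypothetical pollster API

        return [executor.submit(run_one, p) for p in pollsters]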
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.063 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.063 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.064 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.065 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.065 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.067 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.069 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'disk.device.write.requests': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.082 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.083 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.084 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:50:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:50:01.087 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
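The ceilometer lines above trace one compute polling cycle: each pollster first runs its discovery method (local_instances), and when discovery yields nothing the pollster is skipped for that cycle; otherwise it is handed to a thread-pool executor. A minimal sketch of that control flow, using illustrative names rather than ceilometer's actual classes:

from concurrent.futures import ThreadPoolExecutor

def discover_local_instances():
    # Discovery came back empty on this node during this cycle, which is why
    # every pollster above logs "no resources found this cycle".
    return []

def run_polling_cycle(pollsters):
    discovery_cache = {'local_instances': discover_local_instances()}
    with ThreadPoolExecutor(max_workers=4) as executor:
        for name in pollsters:
            resources = discovery_cache['local_instances']
            if not resources:
                print(f'Skip pollster {name}, no resources found this cycle')
                continue
            # With resources present, the sample collection would be submitted
            # to the executor instead of this placeholder.
            executor.submit(print, f'polling {name}')

run_polling_cycle(['memory.usage', 'cpu', 'network.incoming.bytes'])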
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.113 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.113 189489 DEBUG nova.network.neutron [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.164 189489 INFO nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.182 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.291 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.293 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.293 189489 INFO nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Creating image(s)
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.294 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "/var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.294 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "/var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.295 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "/var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.296 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.297 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.350 189489 DEBUG nova.policy [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '90e4f977a2394cadad716cb5d7194e56', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'd35f91af89d64c66961a06f6336a059e', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
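The nova.policy line records an oslo.policy check: network:attach_external_network defaults to admin-only in nova, so a token carrying only the member and reader roles fails it, and the build proceeds without treating the port as an external-network attach. A self-contained sketch of the same check with oslo.policy; the rule string is nova's default, the rest is illustrative:

from oslo_config import cfg
from oslo_policy import policy

enforcer = policy.Enforcer(cfg.CONF)
# Nova's default for this rule is admin-only.
enforcer.register_default(
    policy.RuleDefault('network:attach_external_network', 'is_admin:True'))

creds = {'user_id': '90e4f977a2394cadad716cb5d7194e56',
         'project_id': 'd35f91af89d64c66961a06f6336a059e',
         'roles': ['member', 'reader'],
         'is_admin': False}

# Returns False instead of raising, matching the "Policy check ... failed"
# debug line above.
allowed = enforcer.enforce('network:attach_external_network', {}, creds)
print(allowed)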
Nov 29 15:50:01 compute-0 openstack_network_exporter[205841]: ERROR   15:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:50:01 compute-0 openstack_network_exporter[205841]: ERROR   15:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:50:01 compute-0 openstack_network_exporter[205841]: ERROR   15:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:50:01 compute-0 openstack_network_exporter[205841]: ERROR   15:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:50:01 compute-0 openstack_network_exporter[205841]: ERROR   15:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
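The openstack_network_exporter errors mean the exporter could not find the Unix control sockets it uses to query ovsdb-server and ovn-northd via appctl; on a node that does not run those daemons locally this is expected noise. A hypothetical probe for such sockets (the rundir and the <daemon>.<pid>.ctl naming are the usual OVS conventions, assumed here):

import glob
import os

# Usual OVS rundir; ovn-northd typically uses /var/run/ovn instead, which is
# one reason a compute-only node finds nothing.
RUNDIR = '/var/run/openvswitch'

def find_ctl(daemon):
    # Control sockets are named <daemon>.<pid>.ctl in the rundir.
    matches = glob.glob(os.path.join(RUNDIR, f'{daemon}.*.ctl'))
    return matches[0] if matches else None

for daemon in ('ovsdb-server', 'ovn-northd'):
    if find_ctl(daemon) is None:
        print(f'no control socket files found for {daemon}')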
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.838 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "a8fbb028-7553-448d-8ee5-e0b34ade7315" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.838 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.862 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.936 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.937 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.945 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 15:50:01 compute-0 nova_compute[189485]: 2025-11-29 15:50:01.946 189489 INFO nova.compute.claims [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Claim successful on node compute-0.ctlplane.example.com
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.118 189489 DEBUG nova.compute.provider_tree [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.141 189489 DEBUG nova.scheduler.client.report [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
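The inventory in that line determines how much placement will allow onto the node: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values reported above:

inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2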
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.172 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.236s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.173 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.253 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.254 189489 DEBUG nova.network.neutron [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.284 189489 INFO nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.314 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.620 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.622 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.623 189489 INFO nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Creating image(s)
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.624 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "/var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.624 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "/var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.626 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "/var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.626 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
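Both builds need the same cached base image (hash c7e712fd6afdf0909a364074b7f15b004ad35ab1), so the download-and-convert step runs under a named external lock: the second request blocks here until the first releases the lock (the 1.341s wait logged further down) and then finds the cached file already present. A minimal sketch of that pattern with oslo.concurrency; the lock_path is an assumption for the example:

from oslo_concurrency import lockutils

BASE_HASH = 'c7e712fd6afdf0909a364074b7f15b004ad35ab1'

@lockutils.synchronized(BASE_HASH, external=True, lock_path='/tmp/nova-locks')
def fetch_base_image():
    # Only the first caller actually downloads and converts the image; later
    # callers acquire the lock, see the cached file, and return immediately.
    print('fetching base image once')

fetch_base_image()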
Nov 29 15:50:02 compute-0 podman[250727]: 2025-11-29 15:50:02.69796792 +0000 UTC m=+0.137858247 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 15:50:02 compute-0 nova_compute[189485]: 2025-11-29 15:50:02.753 189489 DEBUG nova.policy [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'fc787028808a4f33ab230e0ce4fff83b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '31e7f8b8153d41ff92532e0affa83e06', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.412 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.436 189489 DEBUG nova.network.neutron [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Successfully created port: b14cc28b-87b6-499b-abf4-437c4c5d74e9 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.492 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1.part --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.493 189489 DEBUG nova.virt.images [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] 6a931c3a-089f-4276-ac71-a0da3ffce7c7 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.495 189489 DEBUG nova.privsep.utils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.496 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1.part /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.836 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1.part /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1.converted" returned: 0 in 0.340s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.846 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.940 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1.converted --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
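The image-cache fetch above shells out to qemu-img twice: info (wrapped in prlimit to cap address space and CPU time) to identify the downloaded format, then convert to rewrite the qcow2 download as raw for the cache. The same two calls, minus the prlimit wrapper and with hypothetical file names:

import json
import subprocess

def qemu_img_info(path):
    # Same flags as the logged commands; --force-share permits inspecting an
    # image another process may have open.
    out = subprocess.check_output(
        ['qemu-img', 'info', path, '--force-share', '--output=json'])
    return json.loads(out)

# fetch_to_raw: the qcow2 download is converted to raw before caching.
subprocess.check_call(['qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                       '-f', 'qcow2', 'base.part', 'base.converted'])
info = qemu_img_info('base.converted')
print(info['format'], info['virtual-size'])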
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.942 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.645s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.967 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.341s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.968 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:50:03 compute-0 nova_compute[189485]: 2025-11-29 15:50:03.991 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.014 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.095 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.096 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.097 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.109 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.122 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.123 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.160 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.051s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.161 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.201 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
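The per-instance disk just created is a copy-on-write qcow2 overlay whose backing file is the shared raw base image; the trailing 1073741824 is the 1 GiB root disk size from the flavor. The equivalent call, with an illustrative output path:

import subprocess

BASE = '/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1'

# Writes to the overlay stay in the instance's qcow2 file; unmodified blocks
# are read from the shared raw base, so many instances reuse one base image.
subprocess.check_call([
    'qemu-img', 'create', '-f', 'qcow2',
    '-o', f'backing_file={BASE},backing_fmt=raw',
    '/tmp/instance-disk.qcow2', '1073741824',
])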
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.202 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.105s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.202 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.215 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.232 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.260 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.261 189489 DEBUG nova.virt.disk.api [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Checking if we can resize image /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.262 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.281 189489 DEBUG nova.network.neutron [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Successfully created port: 6a066856-f7c0-4504-8a23-f8d966710ea5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 15:50:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:04.288 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.288 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:04.290 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.304 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.305 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.336 189489 DEBUG nova.network.neutron [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Successfully updated port: b14cc28b-87b6-499b-abf4-437c4c5d74e9 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.339 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk 1073741824" returned: 0 in 0.034s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.339 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.340 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.353 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.354 189489 DEBUG nova.virt.disk.api [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Cannot resize image /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
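can_resize_image compares the requested size against the disk's current virtual size and only permits growing; here the flavor's 1 GiB target does not exceed the overlay's virtual size, so the resize step is skipped. A simplified paraphrase of that check, not nova's exact code:

import json
import subprocess

def can_resize_image(path, size):
    # Read the current virtual size the same way the logged commands do.
    out = subprocess.check_output(
        ['qemu-img', 'info', path, '--force-share', '--output=json'])
    virt_size = json.loads(out)['virtual-size']
    if virt_size >= size:
        # Shrinking (or a no-op resize) is refused, producing the debug line
        # "Cannot resize image ... to a smaller size." seen above.
        return False
    return True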
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.354 189489 DEBUG nova.objects.instance [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lazy-loading 'migration_context' on Instance uuid 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.363 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.364 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquired lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.364 189489 DEBUG nova.network.neutron [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.373 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.374 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Ensure instance console log exists: /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.374 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.374 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.375 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.394 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.394 189489 DEBUG nova.virt.disk.api [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Checking if we can resize image /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.395 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.451 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.452 189489 DEBUG nova.virt.disk.api [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Cannot resize image /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
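The resize probe above is nova's can_resize_image check: qemu-img info runs under oslo prlimit caps (1 GiB address space, 30 s CPU) and the requested size is compared against the image's current virtual size; shrinking is refused, which is what "Cannot resize image ... to a smaller size" records. A rough, hypothetical re-implementation of the check, not nova's actual code:

    # Rough sketch of the shrink check logged above; not
    # nova.virt.disk.api.can_resize_image itself, just its shape.
    import json
    import subprocess

    def can_resize_image(path, new_size):
        out = subprocess.check_output([
            '/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
            '--as=1073741824', '--cpu=30', '--',
            'env', 'LC_ALL=C', 'LANG=C',
            'qemu-img', 'info', path, '--force-share', '--output=json',
        ])
        # qemu-img can only grow a disk in place, never shrink it.
        return new_size > json.loads(out)['virtual-size']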
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.452 189489 DEBUG nova.objects.instance [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lazy-loading 'migration_context' on Instance uuid a8fbb028-7553-448d-8ee5-e0b34ade7315 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.503 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.503 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Ensure instance console log exists: /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.504 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.504 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.504 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.552 189489 DEBUG nova.network.neutron [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:50:04 compute-0 nova_compute[189485]: 2025-11-29 15:50:04.989 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:05 compute-0 nova_compute[189485]: 2025-11-29 15:50:05.586 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:05 compute-0 podman[250788]: 2025-11-29 15:50:05.694746199 +0000 UTC m=+0.139112660 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.520 189489 DEBUG nova.network.neutron [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Successfully updated port: 6a066856-f7c0-4504-8a23-f8d966710ea5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.539 189489 DEBUG nova.network.neutron [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Updating instance_info_cache with network_info: [{"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.558 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.559 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquired lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.559 189489 DEBUG nova.network.neutron [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.572 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Releasing lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.573 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Instance network_info: |[{"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
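The network_info payloads logged above (here and in the "Updating instance_info_cache" line) are plain JSON once the surrounding |...| markers are stripped, so they can be mined directly when debugging. A small helper, assuming blob holds the JSON array copied out of the log:

    # Pull MAC and fixed-IP pairs out of a logged network_info blob.
    # 'blob' is the JSON array copied from the log line, with the
    # enclosing |...| delimiters removed first.
    import json

    def fixed_ips(blob):
        for vif in json.loads(blob):
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    if ip['type'] == 'fixed':
                        yield vif['address'], ip['address']

    # For the VIF above: ('fa:16:3e:a4:6b:f2', '10.100.0.13')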
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.579 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Start _get_guest_xml network_info=[{"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.595 189489 WARNING nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.603 189489 DEBUG nova.virt.libvirt.host [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.604 189489 DEBUG nova.virt.libvirt.host [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.609 189489 DEBUG nova.virt.libvirt.host [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.610 189489 DEBUG nova.virt.libvirt.host [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.611 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.611 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.612 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.612 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.613 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.613 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.614 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.614 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.615 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.615 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.616 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.616 189489 DEBUG nova.virt.hardware [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
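The nova.virt.hardware lines above trace the topology selection end to end: with flavor and image limits and preferences all 0:0:0, the defaults cap each dimension at 65536, and a 1-vCPU guest admits exactly one factorization, so 1:1:1 is chosen. A toy version of the enumeration step, not nova's actual implementation:

    # Toy enumeration of sockets*cores*threads factorizations of the
    # vCPU count within the limits; for vcpus=1 only (1, 1, 1) fits,
    # matching "Got 1 possible topologies" above.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]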
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.624 189489 DEBUG nova.virt.libvirt.vif [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:49:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1605699510',display_name='tempest-ServersTestManualDisk-server-1605699510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1605699510',id=6,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCfDfrDOPJYWP2EHBy3CBtFXg7Owmc5VEuPgEukF1W4A69Nclda30Sjqrhsp79oOu3o1Xlha7m2bmDQuLhLOWks+GDUR8c0BtZ+CkGB8jqOwUERhFh1Vmwu+vmkFUjvilw==',key_name='tempest-keypair-421912273',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d35f91af89d64c66961a06f6336a059e',ramdisk_id='',reservation_id='r-14a985by',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-997126101',owner_user_name='tempest-ServersTestManualDisk-997126101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:50:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90e4f977a2394cadad716cb5d7194e56',uuid=43c7acb1-c172-4f2d-ad8a-9a0bb198e80b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.625 189489 DEBUG nova.network.os_vif_util [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Converting VIF {"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.628 189489 DEBUG nova.network.os_vif_util [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:6b:f2,bridge_name='br-int',has_traffic_filtering=True,id=b14cc28b-87b6-499b-abf4-437c4c5d74e9,network=Network(c94a881a-57d6-46f7-892d-0f7cbde5b879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb14cc28b-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.630 189489 DEBUG nova.objects.instance [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lazy-loading 'pci_devices' on Instance uuid 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.651 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <uuid>43c7acb1-c172-4f2d-ad8a-9a0bb198e80b</uuid>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <name>instance-00000006</name>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <nova:name>tempest-ServersTestManualDisk-server-1605699510</nova:name>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:50:06</nova:creationTime>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:50:06 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:50:06 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:50:06 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:50:06 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:50:06 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:50:06 compute-0 nova_compute[189485]:        <nova:user uuid="90e4f977a2394cadad716cb5d7194e56">tempest-ServersTestManualDisk-997126101-project-member</nova:user>
Nov 29 15:50:06 compute-0 nova_compute[189485]:        <nova:project uuid="d35f91af89d64c66961a06f6336a059e">tempest-ServersTestManualDisk-997126101</nova:project>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:50:06 compute-0 nova_compute[189485]:        <nova:port uuid="b14cc28b-87b6-499b-abf4-437c4c5d74e9">
Nov 29 15:50:06 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <system>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <entry name="serial">43c7acb1-c172-4f2d-ad8a-9a0bb198e80b</entry>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <entry name="uuid">43c7acb1-c172-4f2d-ad8a-9a0bb198e80b</entry>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </system>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <os>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  </os>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <features>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  </features>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk.config"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:a4:6b:f2"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <target dev="tapb14cc28b-87"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/console.log" append="off"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <video>
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </video>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:50:06 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:50:06 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:50:06 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:50:06 compute-0 nova_compute[189485]: </domain>
Nov 29 15:50:06 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
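The XML dump above is what the driver hands to libvirt next. Feeding a saved copy of it to libvirt by hand looks roughly like the sketch below; nova's real spawn path adds flags, volume and VIF wiring, and rollback, so this shows only the bare define-then-start calls, and the file path is a placeholder:

    # Sketch: define and start a domain from an XML file like the
    # dump above. '/tmp/instance-00000006.xml' is hypothetical.
    import libvirt

    xml = open('/tmp/instance-00000006.xml').read()
    conn = libvirt.open('qemu:///system')
    dom = conn.defineXML(xml)  # persist the definition
    dom.create()               # boot it; systemd later logs
                               # "Started Virtual Machine qemu-6-instance-00000006"
    print(dom.name(), dom.isActive())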
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.654 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Preparing to wait for external event network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.655 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.655 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.656 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
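"Preparing to wait for external event network-vif-plugged-..." is the Neutron handshake: the compute manager registers an event token before plugging the VIF and defining the domain, then blocks on it until Neutron delivers the event or the vif_plugging_timeout (300 s by default) expires. Reduced to plain threading, the register-early/signal-later shape is:

    # Reduced illustration of the external-event wait; nova keeps a
    # per-instance event dict under the "<uuid>-events" lock seen above.
    import threading

    events = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_instance_event(uuid, name):
        return events.setdefault((uuid, name), threading.Event())

    def deliver_instance_event(uuid, name):
        ev = events.pop((uuid, name), None)
        if ev:
            ev.set()  # e.g. Neutron's network-vif-plugged arriving

    token = prepare_for_instance_event(
        '43c7acb1-c172-4f2d-ad8a-9a0bb198e80b',
        'network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9')
    # ... plug the VIF, define and start the domain ...
    token.wait(timeout=300)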
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.658 189489 DEBUG nova.virt.libvirt.vif [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:49:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1605699510',display_name='tempest-ServersTestManualDisk-server-1605699510',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1605699510',id=6,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCfDfrDOPJYWP2EHBy3CBtFXg7Owmc5VEuPgEukF1W4A69Nclda30Sjqrhsp79oOu3o1Xlha7m2bmDQuLhLOWks+GDUR8c0BtZ+CkGB8jqOwUERhFh1Vmwu+vmkFUjvilw==',key_name='tempest-keypair-421912273',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='d35f91af89d64c66961a06f6336a059e',ramdisk_id='',reservation_id='r-14a985by',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-997126101',owner_user_name='tempest-ServersTestManualDisk-997126101-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:50:01Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90e4f977a2394cadad716cb5d7194e56',uuid=43c7acb1-c172-4f2d-ad8a-9a0bb198e80b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.658 189489 DEBUG nova.network.os_vif_util [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Converting VIF {"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.660 189489 DEBUG nova.network.os_vif_util [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:6b:f2,bridge_name='br-int',has_traffic_filtering=True,id=b14cc28b-87b6-499b-abf4-437c4c5d74e9,network=Network(c94a881a-57d6-46f7-892d-0f7cbde5b879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb14cc28b-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.661 189489 DEBUG os_vif [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:6b:f2,bridge_name='br-int',has_traffic_filtering=True,id=b14cc28b-87b6-499b-abf4-437c4c5d74e9,network=Network(c94a881a-57d6-46f7-892d-0f7cbde5b879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb14cc28b-87') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.663 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.664 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.665 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.671 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.673 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb14cc28b-87, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.674 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb14cc28b-87, col_values=(('external_ids', {'iface-id': 'b14cc28b-87b6-499b-abf4-437c4c5d74e9', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:a4:6b:f2', 'vm-uuid': '43c7acb1-c172-4f2d-ad8a-9a0bb198e80b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.678 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.681 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:50:06 compute-0 NetworkManager[56360]: <info>  [1764431406.6812] manager: (tapb14cc28b-87): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/33)
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.693 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.695 189489 INFO os_vif [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:6b:f2,bridge_name='br-int',has_traffic_filtering=True,id=b14cc28b-87b6-499b-abf4-437c4c5d74e9,network=Network(c94a881a-57d6-46f7-892d-0f7cbde5b879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb14cc28b-87')#033[00m
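The AddPortCommand/DbSetCommand pair above is the programmatic form of a single ovs-vsctl transaction; the command-line equivalent is a convenient way to reproduce or inspect the plug when debugging. Wrapped in subprocess for consistency with the other sketches:

    # Shell-level equivalent of the ovsdbapp transaction logged above;
    # the '--' chains both commands into one OVSDB transaction.
    import subprocess

    subprocess.check_call([
        'ovs-vsctl',
        '--may-exist', 'add-port', 'br-int', 'tapb14cc28b-87',
        '--', 'set', 'Interface', 'tapb14cc28b-87',
        'external_ids:iface-id=b14cc28b-87b6-499b-abf4-437c4c5d74e9',
        'external_ids:iface-status=active',
        'external_ids:attached-mac=fa:16:3e:a4:6b:f2',
        'external_ids:vm-uuid=43c7acb1-c172-4f2d-ad8a-9a0bb198e80b',
    ])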
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.807 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.807 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.807 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] No VIF found with MAC fa:16:3e:a4:6b:f2, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.808 189489 INFO nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Using config drive#033[00m
Nov 29 15:50:06 compute-0 nova_compute[189485]: 2025-11-29 15:50:06.939 189489 DEBUG nova.network.neutron [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.318 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.318 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.326 189489 INFO nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Creating config drive at /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk.config#033[00m
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.341 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps661tzjf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.373 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.491 189489 DEBUG oslo_concurrency.processutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmps661tzjf" returned: 0 in 0.150s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
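The config drive is just an ISO9660 image with Joliet and Rock Ridge extensions and volume label config-2, built from a temporary directory of metadata files (the /tmp/tmps661tzjf above). Rebuilding one for inspection with the same flags; the source directory and output name here are placeholders:

    # Build a config-2 labelled ISO the way the command above does.
    # '/tmp/cd' and 'out.iso' are placeholders, not nova paths.
    import subprocess

    subprocess.check_call([
        '/usr/bin/mkisofs', '-o', 'out.iso',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/cd',
    ])
    # Guests locate the drive by label, e.g. blkid -t LABEL=config-2.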
Nov 29 15:50:07 compute-0 kernel: tapb14cc28b-87: entered promiscuous mode
Nov 29 15:50:07 compute-0 NetworkManager[56360]: <info>  [1764431407.5930] manager: (tapb14cc28b-87): new Tun device (/org/freedesktop/NetworkManager/Devices/34)
Nov 29 15:50:07 compute-0 ovn_controller[97827]: 2025-11-29T15:50:07Z|00066|binding|INFO|Claiming lport b14cc28b-87b6-499b-abf4-437c4c5d74e9 for this chassis.
Nov 29 15:50:07 compute-0 ovn_controller[97827]: 2025-11-29T15:50:07Z|00067|binding|INFO|b14cc28b-87b6-499b-abf4-437c4c5d74e9: Claiming fa:16:3e:a4:6b:f2 10.100.0.13
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.597 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:07 compute-0 ovn_controller[97827]: 2025-11-29T15:50:07Z|00068|binding|INFO|Setting lport b14cc28b-87b6-499b-abf4-437c4c5d74e9 ovn-installed in OVS
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.635 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:07 compute-0 systemd-udevd[250830]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:50:07 compute-0 systemd-machined[155802]: New machine qemu-6-instance-00000006.
Nov 29 15:50:07 compute-0 NetworkManager[56360]: <info>  [1764431407.6571] device (tapb14cc28b-87): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:50:07 compute-0 NetworkManager[56360]: <info>  [1764431407.6592] device (tapb14cc28b-87): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:50:07 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Nov 29 15:50:07 compute-0 systemd[1]: Starting libvirt proxy daemon...
Nov 29 15:50:07 compute-0 systemd[1]: Started libvirt proxy daemon.
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.996 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431407.995311, 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:07 compute-0 nova_compute[189485]: 2025-11-29 15:50:07.997 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] VM Started (Lifecycle Event)#033[00m
Nov 29 15:50:08 compute-0 ovn_controller[97827]: 2025-11-29T15:50:08Z|00069|binding|INFO|Setting lport b14cc28b-87b6-499b-abf4-437c4c5d74e9 up in Southbound
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.123 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:6b:f2 10.100.0.13'], port_security=['fa:16:3e:a4:6b:f2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '43c7acb1-c172-4f2d-ad8a-9a0bb198e80b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c94a881a-57d6-46f7-892d-0f7cbde5b879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd35f91af89d64c66961a06f6336a059e', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6e4ac110-4ab3-4d40-9195-92dcc114d1de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c1247f5-290f-4d1e-bac9-b6f672583a0a, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=b14cc28b-87b6-499b-abf4-437c4c5d74e9) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.124 106713 INFO neutron.agent.ovn.metadata.agent [-] Port b14cc28b-87b6-499b-abf4-437c4c5d74e9 in datapath c94a881a-57d6-46f7-892d-0f7cbde5b879 bound to our chassis#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.125 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c94a881a-57d6-46f7-892d-0f7cbde5b879#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.144 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[43604c08-a53c-4844-8bac-c905ee09ae93]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.146 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc94a881a-51 in ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.151 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc94a881a-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.151 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[c64c103c-e15d-4c78-ac1e-d61087a85c1f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.154 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[bedbcb29-8746-496a-8b5c-00faaea5d7a7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.166 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[34ad1898-f2a6-4c9d-96a0-01c0abb7a7f8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.193 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[890f4d8f-c0e3-4aa0-9ee1-96a35817ad4c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.229 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[9e5e03c3-fb42-4369-aa21-1163d6da5b3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.242 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8b7ea554-2101-454c-aadb-0a6a65d64c37]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 NetworkManager[56360]: <info>  [1764431408.2435] manager: (tapc94a881a-50): new Veth device (/org/freedesktop/NetworkManager/Devices/35)
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.279 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[7d0fc1fe-64eb-4971-8d29-c13bc35d344d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.282 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[ac7a89f2-4bd9-4fe9-8f18-3e6f9c3c2897]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 NetworkManager[56360]: <info>  [1764431408.3102] device (tapc94a881a-50): carrier: link connected
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.314 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[0e4ae704-6e4f-40a3-9692-f806be1b461c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.341 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[18a01421-53fb-4fe1-bf28-c0a91bbe8f49]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc94a881a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:59:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516395, 'reachable_time': 35992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250891, 'error': None, 'target': 'ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.362 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[e8d820a6-029a-4ba1-a0f3-0bc24c229f87]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe30:59f3'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516395, 'tstamp': 516395}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250892, 'error': None, 'target': 'ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.385 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[21eefc7d-6078-450d-bdf0-8f5e96ed68d3]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc94a881a-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:30:59:f3'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516395, 'reachable_time': 35992, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250893, 'error': None, 'target': 'ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
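The bulky privsep replies above are raw pyroute2 netlink messages (two RTM_NEWLINK link dumps and an RTM_NEWADDR) for the veth end that was just moved into the ovnmeta namespace. A small sketch of reading the same attributes directly, assuming pyroute2 is available, the namespace exists, and the caller has root:

    from pyroute2 import NetNS

    # Enumerate links inside the metadata namespace and pull the same
    # IFLA_* attributes the privsep daemon returned above.
    with NetNS('ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879') as ns:
        for link in ns.get_links():
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))
    # expected output includes: tapc94a881a-51 fa:16:3e:30:59:f3 UP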
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.427 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[b4f00e84-b305-4f3a-b151-0542835ec302]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.469 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.479 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431407.9954326, 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.479 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.512 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[bdba5d68-905b-4b92-8b79-c58eac77f34b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.513 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc94a881a-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.513 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.514 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc94a881a-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.516 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:08 compute-0 kernel: tapc94a881a-50: entered promiscuous mode
Nov 29 15:50:08 compute-0 NetworkManager[56360]: <info>  [1764431408.5195] manager: (tapc94a881a-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/36)
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.521 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.521 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc94a881a-50, col_values=(('external_ids', {'iface-id': 'f9dd5b59-01b4-49f6-bef7-d18411beaf36'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
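The DbSetCommand above is the binding handshake: writing external_ids:iface-id onto the OVS interface is what lets ovn-controller match it to the corresponding logical port (compare the binding lines ovn_controller emits around it). A rough ovsdbapp sketch of the same write; the socket path and timeout are illustrative, not the agent's configuration:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Connect to the local ovsdb-server and set iface-id on the port,
    # mirroring the transaction the metadata agent logged above.
    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    api.db_set(
        'Interface', 'tapc94a881a-50',
        ('external_ids',
         {'iface-id': 'f9dd5b59-01b4-49f6-bef7-d18411beaf36'})
    ).execute(check_error=True)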
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.524 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:08 compute-0 ovn_controller[97827]: 2025-11-29T15:50:08Z|00070|binding|INFO|Releasing lport f9dd5b59-01b4-49f6-bef7-d18411beaf36 from this chassis (sb_readonly=0)
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.551 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.552 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c94a881a-57d6-46f7-892d-0f7cbde5b879.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c94a881a-57d6-46f7-892d-0f7cbde5b879.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.554 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[80bfd365-b546-4455-a881-880652584404]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.556 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-c94a881a-57d6-46f7-892d-0f7cbde5b879
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/c94a881a-57d6-46f7-892d-0f7cbde5b879.pid.haproxy
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID c94a881a-57d6-46f7-892d-0f7cbde5b879
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
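The config dump above is rendered by neutron's metadata driver from a template before haproxy is spawned inside the namespace: haproxy listens on 169.254.169.254:80 and forwards to the agent's unix socket, tagging each request with the network UUID. A minimal sketch of that render step; the template fragment and variable names model the log output rather than neutron's exact source:

    # Illustrative re-creation of create_config_file's rendering step.
    TEMPLATE = """global
        pidfile     %(pidfile)s
        daemon

    listen listener
        bind %(bind_ip)s:%(bind_port)d
        server metadata %(socket_path)s
        http-request add-header X-OVN-Network-ID %(network_id)s
    """

    cfg = TEMPLATE % {
        'pidfile': '/var/lib/neutron/external/pids/<network-id>.pid.haproxy',
        'bind_ip': '169.254.169.254',
        'bind_port': 80,
        'socket_path': '/var/lib/neutron/metadata_proxy',
        'network_id': 'c94a881a-57d6-46f7-892d-0f7cbde5b879',
    }

The X-OVN-Network-ID header is what lets the metadata service resolve which network a 169.254.169.254 request came from.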
Nov 29 15:50:08 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:08.558 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879', 'env', 'PROCESS_TAG=haproxy-c94a881a-57d6-46f7-892d-0f7cbde5b879', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c94a881a-57d6-46f7-892d-0f7cbde5b879.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.566 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:08 compute-0 nova_compute[189485]: 2025-11-29 15:50:08.571 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
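The sync line above compares the DB power_state (0) against what libvirt reports (3). Nova encodes these as small integers in nova.compute.power_state; libvirt guests are typically created paused during spawn and resumed once port plugging completes, hence the transient PAUSED here. A decoder sketch for reading such lines (values mirror nova.compute.power_state; worth double-checking against your nova release):

    # Integer power states as used in the log line above.
    POWER_STATE = {0: 'NOSTATE', 1: 'RUNNING', 3: 'PAUSED',
                   4: 'SHUTDOWN', 6: 'CRASHED', 7: 'SUSPENDED'}

    db_state, vm_state = 0, 3
    print(POWER_STATE[db_state], '->', POWER_STATE[vm_state])  # NOSTATE -> PAUSED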
Nov 29 15:50:08 compute-0 podman[250924]: 2025-11-29 15:50:08.990497325 +0000 UTC m=+0.079751775 container create cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:50:09 compute-0 nova_compute[189485]: 2025-11-29 15:50:09.008 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:50:09 compute-0 podman[250924]: 2025-11-29 15:50:08.949899804 +0000 UTC m=+0.039154294 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:50:09 compute-0 systemd[1]: Started libpod-conmon-cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae.scope.
Nov 29 15:50:09 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:50:09 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b3e756fde4a24e0700f2a70c8d2ffc495d1f4c0e23942eaed65d64deabfd747e/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:50:09 compute-0 podman[250924]: 2025-11-29 15:50:09.127581339 +0000 UTC m=+0.216835839 container init cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:50:09 compute-0 podman[250924]: 2025-11-29 15:50:09.142397617 +0000 UTC m=+0.231652067 container start cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 15:50:09 compute-0 neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879[250939]: [NOTICE]   (250943) : New worker (250945) forked
Nov 29 15:50:09 compute-0 neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879[250939]: [NOTICE]   (250943) : Loading success.
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.031 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.032 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.046 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.047 189489 INFO nova.compute.claims [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.277 189489 DEBUG nova.compute.provider_tree [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.306 189489 DEBUG nova.scheduler.client.report [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
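The inventory dict above is what the resource tracker reports to Placement; usable capacity per resource class follows the standard Placement rule (total - reserved) * allocation_ratio. A quick worked computation with the figures from this line:

    # Effective capacity implied by the logged inventory.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2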
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.333 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.301s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.335 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.406 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.407 189489 DEBUG nova.network.neutron [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.435 189489 INFO nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.455 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.548 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.549 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.549 189489 INFO nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Creating image(s)#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.550 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "/var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.550 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "/var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.551 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "/var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.562 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.588 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.639 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
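Note that every qemu-img info call above runs under the oslo_concurrency.prlimit shim, capping address space at 1 GiB and CPU time at 30 s so a malformed or hostile image cannot wedge the compute service. The same guard is exposed directly by processutils; a sketch with the limits mirroring the log's --as/--cpu flags (the image path is a placeholder):

    from oslo_concurrency import processutils

    # Resource-limited qemu-img probe, as in the prlimit-wrapped
    # commands logged above; not nova's exact call site.
    limits = processutils.ProcessLimits(address_space=1073741824,
                                        cpu_time=30)
    out, _ = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/_base/<base-image>',
        '--force-share', '--output=json', prlimit=limits)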
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.640 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.642 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.668 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.743 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.745 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.791 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
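The qemu-img create above builds the instance disk as a qcow2 copy-on-write overlay on the cached raw base image, which is why it completes in 0.046s regardless of image size: only guest writes consume new space. A sketch of verifying the resulting backing chain, assuming qemu-img is on PATH:

    import json, subprocess

    # Inspect the overlay the log just created and confirm it points
    # back at the cached base image in _base/.
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--output=json',
         '/var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk']))
    print(info['format'])                   # qcow2
    print(info['backing-filename'])         # .../_base/c7e712fd6afdf090...
    print(info['backing-filename-format'])  # raw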
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.792 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.150s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.793 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.812 189489 DEBUG nova.compute.manager [req-27ca6c67-ab6d-41c9-a449-8dcad69e8420 req-1b76fcab-7b0a-4174-aeee-cb4e4aa7d6db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received event network-changed-b14cc28b-87b6-499b-abf4-437c4c5d74e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.813 189489 DEBUG nova.compute.manager [req-27ca6c67-ab6d-41c9-a449-8dcad69e8420 req-1b76fcab-7b0a-4174-aeee-cb4e4aa7d6db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Refreshing instance network info cache due to event network-changed-b14cc28b-87b6-499b-abf4-437c4c5d74e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.814 189489 DEBUG oslo_concurrency.lockutils [req-27ca6c67-ab6d-41c9-a449-8dcad69e8420 req-1b76fcab-7b0a-4174-aeee-cb4e4aa7d6db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.815 189489 DEBUG oslo_concurrency.lockutils [req-27ca6c67-ab6d-41c9-a449-8dcad69e8420 req-1b76fcab-7b0a-4174-aeee-cb4e4aa7d6db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.815 189489 DEBUG nova.network.neutron [req-27ca6c67-ab6d-41c9-a449-8dcad69e8420 req-1b76fcab-7b0a-4174-aeee-cb4e4aa7d6db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Refreshing network info cache for port b14cc28b-87b6-499b-abf4-437c4c5d74e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.849 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.850 189489 DEBUG nova.virt.disk.api [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Checking if we can resize image /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.850 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.909 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.910 189489 DEBUG nova.virt.disk.api [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Cannot resize image /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.911 189489 DEBUG nova.objects.instance [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lazy-loading 'migration_context' on Instance uuid 857c831e-16aa-4908-8b4d-bf6fc64b8b23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.926 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.927 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Ensure instance console log exists: /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.927 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.928 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.929 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:10 compute-0 nova_compute[189485]: 2025-11-29 15:50:10.982 189489 DEBUG nova.policy [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '5ff5a7c4561f4a87aada601e5a4f9332', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '8a2c00b2ea684b44ae64ef5a0dedb9db', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.138 189489 DEBUG nova.network.neutron [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.159 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Releasing lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.160 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Instance network_info: |[{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.163 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Start _get_guest_xml network_info=[{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.172 189489 WARNING nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.178 189489 DEBUG nova.virt.libvirt.host [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.179 189489 DEBUG nova.virt.libvirt.host [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.183 189489 DEBUG nova.virt.libvirt.host [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.184 189489 DEBUG nova.virt.libvirt.host [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
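
The four lines above show nova probing for a CPU controller, first through cgroups v1 (missing) and then v2 (found). On a unified v2 hierarchy that check reduces to reading one file; a rough sketch of the idea, not nova's actual code in nova/virt/libvirt/host.py:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller(root="/sys/fs/cgroup"):
        # On a unified (v2) hierarchy, cgroup.controllers lists the
        # controllers available at the root, e.g. "cpuset cpu io memory".
        controllers = Path(root, "cgroup.controllers")
        try:
            return "cpu" in controllers.read_text().split()
        except FileNotFoundError:
            # No cgroup.controllers file: not a v2 hierarchy.
            return False

    print(has_cgroupsv2_cpu_controller())
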
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.185 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.185 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.185 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.186 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.186 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.186 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.187 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.187 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.187 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.188 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.188 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.188 189489 DEBUG nova.virt.hardware [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
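
The topology walk above enumerates (sockets, cores, threads) triples whose product covers the vCPU count, bounded by the 65536-per-axis limits. A simplified sketch of that enumeration (illustrative only; the real logic in nova/virt/hardware.py also handles preferences and rounding):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield every (sockets, cores, threads) triple that exactly
        # accounts for the requested vCPU count within the limits.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield (s, c, t)

    # For the 1-vCPU m1.nano flavor this yields exactly [(1, 1, 1)],
    # matching "Got 1 possible topologies" above.
    print(list(possible_topologies(1)))
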
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.191 189489 DEBUG nova.virt.libvirt.vif [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:50:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1814984799',display_name='tempest-AttachInterfacesUnderV243Test-server-1814984799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1814984799',id=7,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxbIX6UWVvi623b2TPdtqR6dmeGyuJb/iUDGidiNkmGh2BwNaoWLgF60VYMySzUoNR4AOGsxFkCRSgQsaKINM96EWpBogdkfjelUHp1uk3e9r5r0s3ahvYCRtOL9cB4Xw==',key_name='tempest-keypair-64440635',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31e7f8b8153d41ff92532e0affa83e06',ramdisk_id='',reservation_id='r-tz43hznh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1283287519',owner_user_name='tempest-AttachInterfacesUnderV243Test-1283287519-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:50:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc787028808a4f33ab230e0ce4fff83b',uuid=a8fbb028-7553-448d-8ee5-e0b34ade7315,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.192 189489 DEBUG nova.network.os_vif_util [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Converting VIF {"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.193 189489 DEBUG nova.network.os_vif_util [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=6a066856-f7c0-4504-8a23-f8d966710ea5,network=Network(4513a63b-8374-4327-8252-b3341ea0d01b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a066856-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
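
The conversion above flattens the nova VIF dict into a typed os-vif object. A hypothetical stand-in using a dataclass shows the shape of that mapping; the field names follow the VIFOpenVSwitch repr in the log line, but this is not the os_vif library itself:

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:
        # Mirrors the fields visible in the "Converted object" log line.
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool
        has_traffic_filtering: bool

    def nova_to_osvif_vif(vif):
        # port_filter=True in vif["details"] becomes has_traffic_filtering.
        return VIFOpenVSwitch(
            id=vif["id"],
            address=vif["address"],
            bridge_name=vif["details"]["bridge_name"],
            vif_name=vif["devname"],
            active=vif["active"],
            has_traffic_filtering=vif["details"]["port_filter"],
        )

    vif = {"id": "6a066856-f7c0-4504-8a23-f8d966710ea5",
           "address": "fa:16:3e:27:bf:aa", "devname": "tap6a066856-f7",
           "active": False,
           "details": {"bridge_name": "br-int", "port_filter": True}}
    print(nova_to_osvif_vif(vif))
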
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.193 189489 DEBUG nova.objects.instance [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lazy-loading 'pci_devices' on Instance uuid a8fbb028-7553-448d-8ee5-e0b34ade7315 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.209 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <uuid>a8fbb028-7553-448d-8ee5-e0b34ade7315</uuid>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <name>instance-00000007</name>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <nova:name>tempest-AttachInterfacesUnderV243Test-server-1814984799</nova:name>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:50:11</nova:creationTime>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:50:11 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:50:11 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:50:11 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:50:11 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:50:11 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:50:11 compute-0 nova_compute[189485]:        <nova:user uuid="fc787028808a4f33ab230e0ce4fff83b">tempest-AttachInterfacesUnderV243Test-1283287519-project-member</nova:user>
Nov 29 15:50:11 compute-0 nova_compute[189485]:        <nova:project uuid="31e7f8b8153d41ff92532e0affa83e06">tempest-AttachInterfacesUnderV243Test-1283287519</nova:project>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:50:11 compute-0 nova_compute[189485]:        <nova:port uuid="6a066856-f7c0-4504-8a23-f8d966710ea5">
Nov 29 15:50:11 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <system>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <entry name="serial">a8fbb028-7553-448d-8ee5-e0b34ade7315</entry>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <entry name="uuid">a8fbb028-7553-448d-8ee5-e0b34ade7315</entry>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </system>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <os>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  </os>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <features>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  </features>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk.config"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:27:bf:aa"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <target dev="tap6a066856-f7"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/console.log" append="off"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <video>
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </video>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:50:11 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:50:11 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:50:11 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:50:11 compute-0 nova_compute[189485]: </domain>
Nov 29 15:50:11 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
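
The domain XML dumped above can be inspected programmatically; nova's own annotations live under the nova metadata namespace declared on the nova:instance element. A short stdlib sketch, with the XML string abridged from the dump above:

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    # Abridged from the _get_guest_xml output above.
    xml = """<domain type="kvm">
      <uuid>a8fbb028-7553-448d-8ee5-e0b34ade7315</uuid>
      <name>instance-00000007</name>
      <metadata>
        <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
          <nova:flavor name="m1.nano"><nova:vcpus>1</nova:vcpus></nova:flavor>
          <nova:ports>
            <nova:port uuid="6a066856-f7c0-4504-8a23-f8d966710ea5">
              <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
            </nova:port>
          </nova:ports>
        </nova:instance>
      </metadata>
    </domain>"""

    dom = ET.fromstring(xml)
    flavor = dom.find(".//nova:flavor", NOVA_NS)
    ip = dom.find(".//nova:ip", NOVA_NS)
    # Prints: instance-00000007 m1.nano 10.100.0.9
    print(dom.findtext("name"), flavor.get("name"), ip.get("address"))
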
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.211 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Preparing to wait for external event network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.212 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.212 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.213 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
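
The lock dance above registers a waiter for network-vif-plugged-6a066856-... before the domain is started, so the neutron callback that arrives later cannot be missed. The pattern reduces to an event registered up front and completed by the notification handler; a minimal sketch with threading.Event (illustrative only; nova's real version in compute/manager.py is eventlet-based):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}

        def prepare(self, instance_uuid, name):
            # Register the waiter *before* kicking off the action that
            # will eventually trigger the external event.
            key = (instance_uuid, name)
            with self._lock:
                return self._events.setdefault(key, threading.Event())

        def deliver(self, instance_uuid, name):
            # Called by the external-event handler, e.g. on
            # network-vif-plugged-<port-id> arriving from neutron.
            with self._lock:
                event = self._events.pop((instance_uuid, name), None)
            if event:
                event.set()

    events = InstanceEvents()
    waiter = events.prepare("a8fbb028", "network-vif-plugged-6a066856")
    events.deliver("a8fbb028", "network-vif-plugged-6a066856")
    print(waiter.wait(timeout=1))  # True: the event arrived
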
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.214 189489 DEBUG nova.virt.libvirt.vif [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:50:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1814984799',display_name='tempest-AttachInterfacesUnderV243Test-server-1814984799',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1814984799',id=7,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxbIX6UWVvi623b2TPdtqR6dmeGyuJb/iUDGidiNkmGh2BwNaoWLgF60VYMySzUoNR4AOGsxFkCRSgQsaKINM96EWpBogdkfjelUHp1uk3e9r5r0s3ahvYCRtOL9cB4Xw==',key_name='tempest-keypair-64440635',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='31e7f8b8153d41ff92532e0affa83e06',ramdisk_id='',reservation_id='r-tz43hznh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-1283287519',owner_user_name='tempest-AttachInterfacesUnderV243Test-1283287519-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:50:02Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc787028808a4f33ab230e0ce4fff83b',uuid=a8fbb028-7553-448d-8ee5-e0b34ade7315,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.215 189489 DEBUG nova.network.os_vif_util [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Converting VIF {"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.216 189489 DEBUG nova.network.os_vif_util [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:27:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=6a066856-f7c0-4504-8a23-f8d966710ea5,network=Network(4513a63b-8374-4327-8252-b3341ea0d01b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a066856-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.216 189489 DEBUG os_vif [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=6a066856-f7c0-4504-8a23-f8d966710ea5,network=Network(4513a63b-8374-4327-8252-b3341ea0d01b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a066856-f7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.217 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.218 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.219 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.223 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.223 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6a066856-f7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.224 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6a066856-f7, col_values=(('external_ids', {'iface-id': '6a066856-f7c0-4504-8a23-f8d966710ea5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:27:bf:aa', 'vm-uuid': 'a8fbb028-7553-448d-8ee5-e0b34ade7315'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.226 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:11 compute-0 NetworkManager[56360]: <info>  [1764431411.2272] manager: (tap6a066856-f7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.228 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.233 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.234 189489 INFO os_vif [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:27:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=6a066856-f7c0-4504-8a23-f8d966710ea5,network=Network(4513a63b-8374-4327-8252-b3341ea0d01b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a066856-f7')#033[00m
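
The two ovsdbapp commands above (AddPortCommand, then DbSetCommand on the Interface row, both in one transaction) are what ovs-vsctl does in a single chained invocation. A hedged sketch of the equivalent CLI call driven from Python, assuming ovs-vsctl is on PATH and the caller has the needed privileges:

    import subprocess

    port = "tap6a066856-f7"
    iface_id = "6a066856-f7c0-4504-8a23-f8d966710ea5"
    mac = "fa:16:3e:27:bf:aa"
    vm_uuid = "a8fbb028-7553-448d-8ee5-e0b34ade7315"

    # --may-exist matches may_exist=True in the logged AddPortCommand;
    # the "--" chains a second command into the same transaction, like
    # the idx=0/idx=1 commands of the single txn above.
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         f"external_ids:iface-id={iface_id}",
         "external_ids:iface-status=active",
         f"external_ids:attached-mac={mac}",
         f"external_ids:vm-uuid={vm_uuid}"],
        check=True)

The external_ids keys are exactly what ovn-controller watches for; setting iface-id is what lets it match the OVS interface to the logical port it claims a few lines further down.
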
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.314 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.315 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.316 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] No VIF found with MAC fa:16:3e:27:bf:aa, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.317 189489 INFO nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Using config drive#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.841 189489 INFO nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Creating config drive at /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk.config#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.848 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsf2n9elf execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.988 189489 DEBUG nova.network.neutron [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Successfully created port: edefdb98-b93f-44d4-b001-9327ca3fbfd5 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 15:50:11 compute-0 nova_compute[189485]: 2025-11-29 15:50:11.994 189489 DEBUG oslo_concurrency.processutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpsf2n9elf" returned: 0 in 0.146s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
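
The config-drive build above is a plain mkisofs run over a staging directory (the /tmp/tmpsf2n9elf tree). A sketch reproducing the call shape with placeholder paths; the flags are the ones visible in the logged command, and on some distributions the mkisofs binary is provided by genisoimage. Note that processutils logs argv joined with spaces, which is why "-publisher OpenStack Compute 27.5.2-..." appears unquoted above; it is a single argument:

    import subprocess
    import tempfile
    from pathlib import Path

    with tempfile.TemporaryDirectory() as staging:
        # Nova stages the openstack/ and ec2/ metadata trees here before
        # packing them; a stub file stands in for that content.
        Path(staging, "hello.txt").write_text("config drive stub\n")
        subprocess.run(
            ["mkisofs", "-o", "/tmp/disk.config",
             "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
             "-publisher", "OpenStack Compute",  # one argv entry
             "-quiet", "-J", "-r", "-V", "config-2", staging],
            check=True)

The volume label config-2 is what cloud-init and similar guests probe for when looking for a config drive.
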
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.034 189489 DEBUG nova.compute.manager [req-31646d69-9d4b-4ea1-8c0b-f59e942cee6b req-f9c8f673-43c2-4f73-a0cd-34bfc27e7150 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-changed-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.035 189489 DEBUG nova.compute.manager [req-31646d69-9d4b-4ea1-8c0b-f59e942cee6b req-f9c8f673-43c2-4f73-a0cd-34bfc27e7150 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Refreshing instance network info cache due to event network-changed-6a066856-f7c0-4504-8a23-f8d966710ea5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.036 189489 DEBUG oslo_concurrency.lockutils [req-31646d69-9d4b-4ea1-8c0b-f59e942cee6b req-f9c8f673-43c2-4f73-a0cd-34bfc27e7150 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.037 189489 DEBUG oslo_concurrency.lockutils [req-31646d69-9d4b-4ea1-8c0b-f59e942cee6b req-f9c8f673-43c2-4f73-a0cd-34bfc27e7150 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.037 189489 DEBUG nova.network.neutron [req-31646d69-9d4b-4ea1-8c0b-f59e942cee6b req-f9c8f673-43c2-4f73-a0cd-34bfc27e7150 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Refreshing network info cache for port 6a066856-f7c0-4504-8a23-f8d966710ea5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:50:12 compute-0 kernel: tap6a066856-f7: entered promiscuous mode
Nov 29 15:50:12 compute-0 NetworkManager[56360]: <info>  [1764431412.0980] manager: (tap6a066856-f7): new Tun device (/org/freedesktop/NetworkManager/Devices/38)
Nov 29 15:50:12 compute-0 ovn_controller[97827]: 2025-11-29T15:50:12Z|00071|binding|INFO|Claiming lport 6a066856-f7c0-4504-8a23-f8d966710ea5 for this chassis.
Nov 29 15:50:12 compute-0 ovn_controller[97827]: 2025-11-29T15:50:12Z|00072|binding|INFO|6a066856-f7c0-4504-8a23-f8d966710ea5: Claiming fa:16:3e:27:bf:aa 10.100.0.9
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.101 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.110 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:bf:aa 10.100.0.9'], port_security=['fa:16:3e:27:bf:aa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a8fbb028-7553-448d-8ee5-e0b34ade7315', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4513a63b-8374-4327-8252-b3341ea0d01b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31e7f8b8153d41ff92532e0affa83e06', 'neutron:revision_number': '2', 'neutron:security_group_ids': '604858cc-9311-4bea-9cbd-ecdfcdc76e2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd3bfea5-211e-4f33-8f36-c788a1fc59d7, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=6a066856-f7c0-4504-8a23-f8d966710ea5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.113 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 6a066856-f7c0-4504-8a23-f8d966710ea5 in datapath 4513a63b-8374-4327-8252-b3341ea0d01b bound to our chassis#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.118 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 4513a63b-8374-4327-8252-b3341ea0d01b#033[00m
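
The PortBindingUpdatedEvent above fires when a Port_Binding row's chassis column flips from empty to this node's chassis; that transition is what "bound to our chassis" means. A stripped-down sketch of that match predicate with stand-in row objects (hypothetical types; the real matcher lives in the neutron/ovsdbapp event classes):

    from types import SimpleNamespace

    def port_bound_to_our_chassis(row, old, our_chassis):
        # "old" carries only the columns that changed; chassis == []
        # there means the row was previously unbound.
        previously_unbound = getattr(old, "chassis", None) == []
        now_ours = bool(row.chassis) and row.chassis[0].name == our_chassis
        return previously_unbound and now_ours

    chassis = SimpleNamespace(name="compute-0.ctlplane.example.com")
    row = SimpleNamespace(
        chassis=[chassis],
        logical_port="6a066856-f7c0-4504-8a23-f8d966710ea5")
    old = SimpleNamespace(chassis=[])
    # Prints True, mirroring the "bound to our chassis" line above.
    print(port_bound_to_our_chassis(row, old,
                                    "compute-0.ctlplane.example.com"))
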
Nov 29 15:50:12 compute-0 ovn_controller[97827]: 2025-11-29T15:50:12Z|00073|binding|INFO|Setting lport 6a066856-f7c0-4504-8a23-f8d966710ea5 ovn-installed in OVS
Nov 29 15:50:12 compute-0 ovn_controller[97827]: 2025-11-29T15:50:12Z|00074|binding|INFO|Setting lport 6a066856-f7c0-4504-8a23-f8d966710ea5 up in Southbound
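
Once ovn-controller sets the lport up in the Southbound DB, the binding can be verified from the CLI. A sketch, assuming ovn-sbctl can reach the southbound database and supports the generic find command:

    import subprocess

    # Show which chassis claimed the logical port; after the
    # "Claiming lport" lines above, chassis should reference compute-0
    # and up should be true.
    out = subprocess.run(
        ["ovn-sbctl", "--columns=logical_port,chassis,up",
         "find", "Port_Binding",
         "logical_port=6a066856-f7c0-4504-8a23-f8d966710ea5"],
        capture_output=True, text=True, check=True)
    print(out.stdout)
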
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.133 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.134 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a95f97c7-c65a-4d7a-8395-34d01f2e3cb3]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.135 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap4513a63b-81 in ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
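
Provisioning the metadata datapath means creating a veth pair with one end inside the ovnmeta-<network-uuid> namespace, as the agent logs here (the agent does this through privsep rather than the CLI). A rough iproute2 equivalent driven from Python; interface and namespace names are copied from the log, and root privileges are required:

    import subprocess

    ns = "ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b"
    outer, inner = "tap4513a63b-80", "tap4513a63b-81"

    def sh(*args):
        subprocess.run(args, check=True)

    sh("ip", "netns", "add", ns)
    # One end stays in the root namespace (and is plugged into br-int,
    # as the AddPortCommand further down shows); the peer moves into
    # the metadata namespace.
    sh("ip", "link", "add", outer, "type", "veth", "peer", "name", inner)
    sh("ip", "link", "set", inner, "netns", ns)
    sh("ip", "netns", "exec", ns, "ip", "link", "set", inner, "up")
    sh("ip", "link", "set", outer, "up")
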
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.138 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap4513a63b-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.138 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a16e63a1-99c5-422a-b3c2-fe0384ab83f6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.141 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.141 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[31115514-aae9-454a-815e-a43d87ab27d2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.171 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[672c763d-e17b-49e9-b039-ed7996df8d1c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 systemd-machined[155802]: New machine qemu-7-instance-00000007.
Nov 29 15:50:12 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Nov 29 15:50:12 compute-0 systemd-udevd[250992]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.209 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1aa7340c-66da-4c3b-b231-0737e249613c]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 NetworkManager[56360]: <info>  [1764431412.2249] device (tap6a066856-f7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:50:12 compute-0 NetworkManager[56360]: <info>  [1764431412.2317] device (tap6a066856-f7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.267 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[da60e1ec-a67c-480a-8331-877a04281a7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.274 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[110772d5-b05a-4a0a-b398-68ea05014ea7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 systemd-udevd[250995]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:50:12 compute-0 NetworkManager[56360]: <info>  [1764431412.2780] manager: (tap4513a63b-80): new Veth device (/org/freedesktop/NetworkManager/Devices/39)
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.293 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.325 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[8b565fd7-809c-42a7-96dd-21b19ccaa05f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.329 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[16858b69-0467-422c-bdf0-16080adca38a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 NetworkManager[56360]: <info>  [1764431412.3633] device (tap4513a63b-80): carrier: link connected
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.371 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[75a75fa4-ce82-4a22-9901-e4ca07676d95]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.392 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[2901aa4d-4a72-4cba-a95b-5f70f68e1190]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4513a63b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:26:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516801, 'reachable_time': 24129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251022, 'error': None, 'target': 'ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.417 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8705f6aa-ed9e-4e65-81ea-27c97f1d093d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fea4:261f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 516801, 'tstamp': 516801}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251023, 'error': None, 'target': 'ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.442 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[ab8f795f-5ebd-40b1-a3cb-0a5acd740ad5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap4513a63b-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:a4:26:1f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 2, 'rx_bytes': 90, 'tx_bytes': 176, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516801, 'reachable_time': 24129, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 2, 'outoctets': 148, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 2, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 148, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 2, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251024, 'error': None, 'target': 'ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
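An aside on the mechanics: the agent process itself is unprivileged, so the netlink queries above (the RTM_NEWADDR and RTM_NEWLINK dumps for the tap device inside the ovnmeta-... namespace) are executed by a forked privsep daemon and shipped back as the (msgtype, result) tuples being logged. A minimal sketch of that pattern, assuming oslo.privsep's PrivContext/entrypoint API plus pyroute2; the context name and capability set are illustrative, neutron defines its own in neutron.privileged:

    from oslo_privsep import capabilities, priv_context
    from pyroute2 import IPRoute

    # Illustrative context; real code also wires this into rootwrap/sudo.
    ctx = priv_context.PrivContext(
        'example', cfg_section='privsep',
        capabilities=[capabilities.CAP_NET_ADMIN])

    @ctx.entrypoint
    def link_names():
        # Body runs inside the privileged daemon; the return value is
        # serialized back to the caller, which is what reply[...] shows.
        with IPRoute() as ipr:
            return [msg.get_attr('IFLA_IFNAME') for msg in ipr.get_links()]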
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.478 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[493ceafc-f3f8-4c93-bcd3-8e2e1d8160a5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.546 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[f6fd4807-8167-45a0-b672-adc29475451e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.549 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4513a63b-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.550 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.551 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap4513a63b-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.554 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:12 compute-0 NetworkManager[56360]: <info>  [1764431412.5564] manager: (tap4513a63b-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/40)
Nov 29 15:50:12 compute-0 kernel: tap4513a63b-80: entered promiscuous mode
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.560 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.561 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap4513a63b-80, col_values=(('external_ids', {'iface-id': 'ec3a721a-108a-4ae8-a5bc-85ed17fb9b58'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
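The DelPort/AddPort/DbSet trio above is the agent (re)plugging the metadata port: drop any stale tap4513a63b-80 from br-ex, add it to br-int, then stamp external_ids:iface-id with the logical port UUID so ovn-controller can bind it (its reaction is visible just below, releasing the previous binding first). A minimal ovsdbapp sketch of the same commands, batched into one transaction for brevity and assuming the default local OVS database socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap4513a63b-80', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap4513a63b-80', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap4513a63b-80',
            ('external_ids', {'iface-id': 'ec3a721a-108a-4ae8-a5bc-85ed17fb9b58'})))

The "Transaction caused no change" reply to the DelPort a few lines up is the normal idempotent path: with if_exists=True, deleting a port that is not on br-ex is a no-op.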
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.564 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:12 compute-0 ovn_controller[97827]: 2025-11-29T15:50:12Z|00075|binding|INFO|Releasing lport ec3a721a-108a-4ae8-a5bc-85ed17fb9b58 from this chassis (sb_readonly=0)
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.566 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.567 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/4513a63b-8374-4327-8252-b3341ea0d01b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/4513a63b-8374-4327-8252-b3341ea0d01b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
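The ENOENT on the .pid.haproxy file is benign: a missing pidfile just means no proxy is running for this datapath yet, so the agent falls through to rendering a fresh config below. A hypothetical helper showing the intended behavior of such a read:

    def pid_from_file(path):
        # Absent pidfile => proxy not running; anything else propagates.
        try:
            with open(path) as f:
                return int(f.read().strip())
        except FileNotFoundError:
            return None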
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.569 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[db47e81b-a531-4dff-a119-07a2091bfb7d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.570 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-4513a63b-8374-4327-8252-b3341ea0d01b
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/4513a63b-8374-4327-8252-b3341ea0d01b.pid.haproxy
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID 4513a63b-8374-4327-8252-b3341ea0d01b
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 15:50:12 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:12.571 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b', 'env', 'PROCESS_TAG=haproxy-4513a63b-8374-4327-8252-b3341ea0d01b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/4513a63b-8374-4327-8252-b3341ea0d01b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
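The rootwrap invocation above reduces to "run haproxy with that config inside the ovnmeta namespace, tagged so the agent can find and stop it later". Minus the neutron-rootwrap privilege hop, the equivalent is roughly the following (paths and tag copied from the log line; requires root):

    import subprocess

    netns = 'ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b'
    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '4513a63b-8374-4327-8252-b3341ea0d01b.conf')

    subprocess.run(
        ['ip', 'netns', 'exec', netns,
         'env', 'PROCESS_TAG=haproxy-4513a63b-8374-4327-8252-b3341ea0d01b',
         'haproxy', '-f', cfg],
        check=True)

On this podified deployment the process additionally lands inside a podman container, which is the neutron-haproxy-ovnmeta-... create/start sequence at 15:50:13 below.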
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.582 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.723 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431412.7228272, a8fbb028-7553-448d-8ee5-e0b34ade7315 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.741 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] VM Started (Lifecycle Event)#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.770 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.776 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431412.7229583, a8fbb028-7553-448d-8ee5-e0b34ade7315 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.776 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.799 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.804 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.823 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
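The Started-then-Paused pair during a build is normal: libvirt creates the domain paused, so the hypervisor briefly reports PAUSED (power_state 3) while the DB still holds NOSTATE (0); because task_state is 'spawning', the sync code deliberately does nothing. An outline of that guard (the constant values match nova.compute.power_state to the best of my knowledge; the logic is a paraphrase of the manager code cited above, not a copy):

    NOSTATE, RUNNING, PAUSED = 0, 1, 3  # nova.compute.power_state values

    def handle_lifecycle_event(db_power_state, vm_power_state, task_state):
        # A pending task (here 'spawning') owns the instance; lifecycle
        # events seen in the meantime must not trigger corrective action.
        if task_state is not None:
            return 'skip'
        if db_power_state != vm_power_state:
            return 'sync'
        return 'noop'

    print(handle_lifecycle_event(NOSTATE, PAUSED, 'spawning'))  # skip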
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.836 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.836 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
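The Acquiring/acquired/released triples are oslo.concurrency's lockutils at work: one lock per instance UUID serializes the whole build, and named locks such as "compute_resources" below serialize shared state; the waited/held durations in these messages are the first thing to check when builds appear to queue behind each other. Typical usage looks like this (names illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized('ea685573-5d12-4d41-8c8d-1d73dc63399d')
    def locked_build():
        pass  # one build at a time per lock name

    with lockutils.lock('compute_resources'):
        pass  # context-manager form, as for the refresh_cache-* locks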
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.854 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.869 189489 DEBUG nova.network.neutron [req-27ca6c67-ab6d-41c9-a449-8dcad69e8420 req-1b76fcab-7b0a-4174-aeee-cb4e4aa7d6db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Updated VIF entry in instance network info cache for port b14cc28b-87b6-499b-abf4-437c4c5d74e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.870 189489 DEBUG nova.network.neutron [req-27ca6c67-ab6d-41c9-a449-8dcad69e8420 req-1b76fcab-7b0a-4174-aeee-cb4e4aa7d6db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Updating instance_info_cache with network_info: [{"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.893 189489 DEBUG oslo_concurrency.lockutils [req-27ca6c67-ab6d-41c9-a449-8dcad69e8420 req-1b76fcab-7b0a-4174-aeee-cb4e4aa7d6db 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.934 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.934 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.942 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.943 189489 INFO nova.compute.claims [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.950 189489 DEBUG nova.network.neutron [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Successfully updated port: edefdb98-b93f-44d4-b001-9327ca3fbfd5 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.980 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.982 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquired lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:12 compute-0 nova_compute[189485]: 2025-11-29 15:50:12.983 189489 DEBUG nova.network.neutron [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:50:13 compute-0 podman[251061]: 2025-11-29 15:50:13.008253116 +0000 UTC m=+0.086446683 container create f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 15:50:13 compute-0 podman[251061]: 2025-11-29 15:50:12.958916131 +0000 UTC m=+0.037109738 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:50:13 compute-0 systemd[1]: Started libpod-conmon-f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67.scope.
Nov 29 15:50:13 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:50:13 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ad0fa9ffa2cac37199275306747be993c6176e1742822ce7d59cb8280c3f946/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:50:13 compute-0 podman[251061]: 2025-11-29 15:50:13.136871374 +0000 UTC m=+0.215064951 container init f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0)
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.141 189489 DEBUG nova.compute.provider_tree [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:50:13 compute-0 podman[251061]: 2025-11-29 15:50:13.155538385 +0000 UTC m=+0.233731932 container start f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 15:50:13 compute-0 neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b[251076]: [NOTICE]   (251080) : New worker (251082) forked
Nov 29 15:50:13 compute-0 neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b[251076]: [NOTICE]   (251080) : Loading success.
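Create, init, start, then haproxy's own NOTICE lines: the metadata proxy launched at 15:50:12 runs wrapped in a per-datapath podman container named after the ovnmeta namespace. To check on it afterwards (container name from the log; State keys per podman's inspect JSON):

    import json
    import subprocess

    name = 'neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b'
    state = json.loads(subprocess.check_output(
        ['podman', 'inspect', name]))[0]['State']
    print(state['Status'], state['Pid'])  # expect 'running' and the proxy's pid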
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.342 189489 DEBUG nova.scheduler.client.report [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
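Reading that inventory dict: placement computes schedulable capacity per resource class as (total - reserved) * allocation_ratio, as I understand the formula, so this 8-core, 7679 MB, 79 GB node advertises rather different numbers to the scheduler:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2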
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.379 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.445s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.380 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.423 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.424 189489 DEBUG nova.network.neutron [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.441 189489 INFO nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.457 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.460 189489 DEBUG nova.network.neutron [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.544 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.547 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.548 189489 INFO nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Creating image(s)#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.549 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.550 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.551 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.577 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.674 189489 DEBUG nova.policy [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b595faab5dfa4b4e9aff6a34b1473172', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '79e3732a895b43ce86538671ea9e7670', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
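The failed policy check here is a probe, not a fault: nova asks whether this user (roles reader/member only) may attach external networks, and the negative answer merely filters external networks out of what the build may use. Reduced to bare oslo.policy, the moving parts look like this (the 'role:admin' default is illustrative, nova registers its own default for this rule, and a real service wires the enforcer to its config and policy files):

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'role:admin'))

    creds = {'roles': ['reader', 'member'],
             'project_id': '79e3732a895b43ce86538671ea9e7670'}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # False, matching the log line above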
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.705 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
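Each of these qemu-img info calls runs under oslo.concurrency's prlimit wrapper, capping the child at 1 GiB of address space and 30 s of CPU (the --as/--cpu values visible in the command line) so a malformed image cannot wedge the compute service. The same guard through the library API, as far as I recall it (nova's actual limits object lives in nova.virt.images):

    from oslo_concurrency import processutils
    from oslo_utils import units

    limits = processutils.ProcessLimits(address_space=1 * units.Gi, cpu_time=30)
    out, _err = processutils.execute(
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1',
        '--force-share', '--output=json',
        prlimit=limits, env_variables={'LC_ALL': 'C', 'LANG': 'C'})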
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.706 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.707 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.727 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.749 189489 DEBUG nova.network.neutron [req-31646d69-9d4b-4ea1-8c0b-f59e942cee6b req-f9c8f673-43c2-4f73-a0cd-34bfc27e7150 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updated VIF entry in instance network info cache for port 6a066856-f7c0-4504-8a23-f8d966710ea5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.750 189489 DEBUG nova.network.neutron [req-31646d69-9d4b-4ea1-8c0b-f59e942cee6b req-f9c8f673-43c2-4f73-a0cd-34bfc27e7150 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.773 189489 DEBUG oslo_concurrency.lockutils [req-31646d69-9d4b-4ea1-8c0b-f59e942cee6b req-f9c8f673-43c2-4f73-a0cd-34bfc27e7150 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.829 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.829 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.878 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
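The create command gives the instance a thin qcow2 overlay backed by the shared, content-addressed base image in _base; only the instance's own writes land in its disk file, which is why image-backed spawns are quick. To verify the chain after the fact (paths from the log; key names per qemu-img's JSON output):

    import json
    import subprocess

    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', '--force-share', '--output=json',
         '/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk']))
    print(info['format'], info.get('backing-filename'), info['virtual-size'])
    # expect: qcow2, the _base path above, 1073741824

The "Cannot resize image ... to a smaller size" line shortly after is the benign outcome of the same flow: the overlay was already created at the flavor's 1 GiB root size, and nova only ever grows disks.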
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.880 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.172s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.881 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.981 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.982 189489 DEBUG nova.virt.disk.api [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Checking if we can resize image /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:50:13 compute-0 nova_compute[189485]: 2025-11-29 15:50:13.982 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.075 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.081 189489 DEBUG nova.virt.disk.api [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Cannot resize image /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.082 189489 DEBUG nova.objects.instance [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lazy-loading 'migration_context' on Instance uuid ea685573-5d12-4d41-8c8d-1d73dc63399d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.097 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.098 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Ensure instance console log exists: /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.098 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.099 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.099 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.584 189489 DEBUG nova.network.neutron [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Updating instance_info_cache with network_info: [{"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.618 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Releasing lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.619 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Instance network_info: |[{"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.624 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Start _get_guest_xml network_info=[{"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.640 189489 WARNING nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.660 189489 DEBUG nova.virt.libvirt.host [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.661 189489 DEBUG nova.virt.libvirt.host [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.666 189489 DEBUG nova.virt.libvirt.host [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.666 189489 DEBUG nova.virt.libvirt.host [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.667 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.668 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.668 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.669 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.669 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.670 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.670 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.670 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.671 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.671 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.672 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.672 189489 DEBUG nova.virt.hardware [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
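The topology chatter above (limits 0:0:0, maxima 65536:65536:65536, a single candidate 1:1:1) is nova enumerating every sockets:cores:threads factorization of the flavor's vcpu count that respects the flavor and image limits; with one vCPU and no constraints there is exactly one. A toy paraphrase of the enumeration (not nova's code):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Yield (sockets, cores, threads) with s * c * t == vcpus.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(max(vcpus // s, 1), max_cores) + 1):
                t, rem = divmod(vcpus, s * c)
                if rem == 0 and t <= max_threads:
                    yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)]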
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.678 189489 DEBUG nova.virt.libvirt.vif [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:50:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-478947030',display_name='tempest-ServersTestJSON-server-478947030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-478947030',id=8,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbASp+Y2GFYtyctN4zFsXV4Yw34qHyoIxNYEUuBYoa1l4ucr5Hl8EX+a6am74YbwCLD1ae1Nlemi69FMS+F+Ji9q4w40jNt4jsb1ZVxWPnDlWf2tpRKugHBkvU+XKLSrg==',key_name='tempest-keypair-1803496096',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a2c00b2ea684b44ae64ef5a0dedb9db',ramdisk_id='',reservation_id='r-uegnxfgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1871335564',owner_user_name='tempest-ServersTestJSON-1871335564-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:50:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff5a7c4561f4a87aada601e5a4f9332',uuid=857c831e-16aa-4908-8b4d-bf6fc64b8b23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.679 189489 DEBUG nova.network.os_vif_util [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Converting VIF {"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.680 189489 DEBUG nova.network.os_vif_util [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:b3:bc,bridge_name='br-int',has_traffic_filtering=True,id=edefdb98-b93f-44d4-b001-9327ca3fbfd5,network=Network(da0a31ff-8236-4651-927c-b129d61fb520),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedefdb98-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
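nova_to_osvif_vif translates the legacy VIF dict into an os-vif versioned object, which the 'ovs' plugin named in the repr then consumes. A hedged sketch of building the same object directly with the os_vif library; the field values come from the logged repr, but treat the exact constructor kwargs as assumptions of this sketch:

    import os_vif
    from os_vif.objects import network as net_obj
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # load the plugins (ovs, linux_bridge, ...)

    network = net_obj.Network(id='da0a31ff-8236-4651-927c-b129d61fb520',
                              bridge='br-int', mtu=1442)
    vif = vif_obj.VIFOpenVSwitch(
        id='edefdb98-b93f-44d4-b001-9327ca3fbfd5',
        address='fa:16:3e:dc:b3:bc',
        vif_name='tapedefdb98-b9',
        bridge_name='br-int',
        has_traffic_filtering=True,
        network=network,
        port_profile=vif_obj.VIFPortProfileOpenVSwitch(
            interface_id='edefdb98-b93f-44d4-b001-9327ca3fbfd5'),
    )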
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.681 189489 DEBUG nova.objects.instance [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lazy-loading 'pci_devices' on Instance uuid 857c831e-16aa-4908-8b4d-bf6fc64b8b23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.697 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <uuid>857c831e-16aa-4908-8b4d-bf6fc64b8b23</uuid>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <name>instance-00000008</name>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <nova:name>tempest-ServersTestJSON-server-478947030</nova:name>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:50:14</nova:creationTime>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:50:14 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:50:14 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:50:14 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:50:14 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:50:14 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:50:14 compute-0 nova_compute[189485]:        <nova:user uuid="5ff5a7c4561f4a87aada601e5a4f9332">tempest-ServersTestJSON-1871335564-project-member</nova:user>
Nov 29 15:50:14 compute-0 nova_compute[189485]:        <nova:project uuid="8a2c00b2ea684b44ae64ef5a0dedb9db">tempest-ServersTestJSON-1871335564</nova:project>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:50:14 compute-0 nova_compute[189485]:        <nova:port uuid="edefdb98-b93f-44d4-b001-9327ca3fbfd5">
Nov 29 15:50:14 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <system>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <entry name="serial">857c831e-16aa-4908-8b4d-bf6fc64b8b23</entry>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <entry name="uuid">857c831e-16aa-4908-8b4d-bf6fc64b8b23</entry>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </system>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <os>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  </os>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <features>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  </features>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk.config"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:dc:b3:bc"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <target dev="tapedefdb98-b9"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/console.log" append="off"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <video>
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </video>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:50:14 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:50:14 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:50:14 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:50:14 compute-0 nova_compute[189485]: </domain>
Nov 29 15:50:14 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
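The finished domain XML is handed to libvirt, which defines and boots the guest (see the "Started Virtual Machine qemu-8-instance-00000008" line further down). A minimal libvirt-python sketch of that hand-off; the file name is hypothetical and error handling is omitted:

    import libvirt

    # Hypothetical file holding the <domain> document logged above.
    xml_doc = open('domain.xml').read()

    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml_doc)  # persist the definition
        dom.create()                   # boot it; nova's real launch path
                                       # also wires in event callbacks/flags
    finally:
        conn.close()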
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.698 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Preparing to wait for external event network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.699 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.700 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.701 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
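The three lockutils lines above guard registration of an event waiter: nova records that it expects network-vif-plugged for this port before plugging the VIF, so neutron's notification cannot race past it. A toy sketch of that prepare-then-wait pattern using threading (nova's real implementation is eventlet-based; this is only an analogue):

    import threading

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}  # (instance_uuid, event_name) -> Event

        def prepare(self, uuid, name):
            # Register the waiter *before* triggering the action that
            # produces the event, as nova does before plugging the VIF.
            with self._lock:
                return self._events.setdefault((uuid, name),
                                               threading.Event())

        def deliver(self, uuid, name):
            with self._lock:
                ev = self._events.pop((uuid, name), None)
            if ev:
                ev.set()

    events = InstanceEvents()
    waiter = events.prepare(
        '857c831e-16aa-4908-8b4d-bf6fc64b8b23',
        'network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5')
    # ... plug the VIF; the external-event handler calls events.deliver(...)
    # and waiter.wait(timeout=...) unblocks once the port goes active.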
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.702 189489 DEBUG nova.virt.libvirt.vif [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:50:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-478947030',display_name='tempest-ServersTestJSON-server-478947030',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-478947030',id=8,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbASp+Y2GFYtyctN4zFsXV4Yw34qHyoIxNYEUuBYoa1l4ucr5Hl8EX+a6am74YbwCLD1ae1Nlemi69FMS+F+Ji9q4w40jNt4jsb1ZVxWPnDlWf2tpRKugHBkvU+XKLSrg==',key_name='tempest-keypair-1803496096',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='8a2c00b2ea684b44ae64ef5a0dedb9db',ramdisk_id='',reservation_id='r-uegnxfgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-1871335564',owner_user_name='tempest-ServersTestJSON-1871335564-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:50:10Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff5a7c4561f4a87aada601e5a4f9332',uuid=857c831e-16aa-4908-8b4d-bf6fc64b8b23,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": 
null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.703 189489 DEBUG nova.network.os_vif_util [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Converting VIF {"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.704 189489 DEBUG nova.network.os_vif_util [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:b3:bc,bridge_name='br-int',has_traffic_filtering=True,id=edefdb98-b93f-44d4-b001-9327ca3fbfd5,network=Network(da0a31ff-8236-4651-927c-b129d61fb520),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedefdb98-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.704 189489 DEBUG os_vif [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:b3:bc,bridge_name='br-int',has_traffic_filtering=True,id=edefdb98-b93f-44d4-b001-9327ca3fbfd5,network=Network(da0a31ff-8236-4651-927c-b129d61fb520),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedefdb98-b9') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.705 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.706 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.706 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.710 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.710 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapedefdb98-b9, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.711 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapedefdb98-b9, col_values=(('external_ids', {'iface-id': 'edefdb98-b93f-44d4-b001-9327ca3fbfd5', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:dc:b3:bc', 'vm-uuid': '857c831e-16aa-4908-8b4d-bf6fc64b8b23'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.713 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
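The AddBridgeCommand/AddPortCommand/DbSetCommand transactions above are os-vif's ovs plugin talking to the local OVSDB through ovsdbapp; the external_ids it writes (iface-id, attached-mac, vm-uuid) are what ovn-controller matches when it claims the lport below. A hedged sketch of the same transaction with ovsdbapp's Open_vSwitch API; the connection bootstrap details (socket path, timeout) are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # One transaction: add the tap port, then tag its Interface row so
    # ovn-controller can bind the logical port to this chassis.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tapedefdb98-b9', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapedefdb98-b9',
            ('external_ids', {
                'iface-id': 'edefdb98-b93f-44d4-b001-9327ca3fbfd5',
                'iface-status': 'active',
                'attached-mac': 'fa:16:3e:dc:b3:bc',
                'vm-uuid': '857c831e-16aa-4908-8b4d-bf6fc64b8b23'})))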
Nov 29 15:50:14 compute-0 NetworkManager[56360]: <info>  [1764431414.7143] manager: (tapedefdb98-b9): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.715 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.725 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.726 189489 INFO os_vif [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:b3:bc,bridge_name='br-int',has_traffic_filtering=True,id=edefdb98-b93f-44d4-b001-9327ca3fbfd5,network=Network(da0a31ff-8236-4651-927c-b129d61fb520),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedefdb98-b9')#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.790 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.792 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.792 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] No VIF found with MAC fa:16:3e:dc:b3:bc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:50:14 compute-0 nova_compute[189485]: 2025-11-29 15:50:14.793 189489 INFO nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Using config drive#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.074 189489 DEBUG nova.network.neutron [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Successfully created port: 471b576d-abd9-4813-915c-33fdffb4ae94 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.082 189489 DEBUG nova.compute.manager [req-f8f529a8-7c50-4c4e-b573-e2646ebd801e req-c0b51d95-61d0-47f4-8b68-7ce34bc9f26c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received event network-changed-edefdb98-b93f-44d4-b001-9327ca3fbfd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.083 189489 DEBUG nova.compute.manager [req-f8f529a8-7c50-4c4e-b573-e2646ebd801e req-c0b51d95-61d0-47f4-8b68-7ce34bc9f26c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Refreshing instance network info cache due to event network-changed-edefdb98-b93f-44d4-b001-9327ca3fbfd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.083 189489 DEBUG oslo_concurrency.lockutils [req-f8f529a8-7c50-4c4e-b573-e2646ebd801e req-c0b51d95-61d0-47f4-8b68-7ce34bc9f26c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.083 189489 DEBUG oslo_concurrency.lockutils [req-f8f529a8-7c50-4c4e-b573-e2646ebd801e req-c0b51d95-61d0-47f4-8b68-7ce34bc9f26c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.084 189489 DEBUG nova.network.neutron [req-f8f529a8-7c50-4c4e-b573-e2646ebd801e req-c0b51d95-61d0-47f4-8b68-7ce34bc9f26c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Refreshing network info cache for port edefdb98-b93f-44d4-b001-9327ca3fbfd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.304 189489 INFO nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Creating config drive at /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk.config#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.309 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjvaas_cj execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.434 189489 DEBUG oslo_concurrency.processutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjvaas_cj" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
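mkisofs packs a staging directory into an iso9660 volume labelled config-2, which the guest mounts by that label (the cdrom disk targeting sda in the XML above). A sketch reproducing the logged invocation from Python; the output path and the staged metadata content are illustrative, and a real tree also carries network_data.json, user_data, vendor data and EC2-style aliases:

    import json
    import pathlib
    import subprocess
    import tempfile

    with tempfile.TemporaryDirectory() as staging:
        # Minimal config-drive layout (assumption: only meta_data.json).
        md_dir = pathlib.Path(staging, 'openstack', 'latest')
        md_dir.mkdir(parents=True)
        (md_dir / 'meta_data.json').write_text(json.dumps({
            'uuid': '857c831e-16aa-4908-8b4d-bf6fc64b8b23',
            'hostname': 'tempest-serverstestjson-server-478947030'}))

        # Flags mirror the command logged above.
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', 'disk.config',
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher',
             'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
             '-quiet', '-J', '-r', '-V', 'config-2', staging],
            check=True)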
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.507 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.507 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.507 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.508 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.508 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
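_heal_instance_info_cache is an oslo.service periodic task (driven by the run_periodic_tasks line above); on this pass every instance is still building, so the cache refresh finds nothing to do. A skeleton of how such a task is declared, with the spacing value an assumption:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # Refresh the network info cache for one instance per pass,
            # skipping instances that are still building (as logged above).
            pass

    mgr = Manager()
    # Normally invoked from a looping timer in the service framework.
    mgr.run_periodic_tasks(context=None)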
Nov 29 15:50:15 compute-0 kernel: tapedefdb98-b9: entered promiscuous mode
Nov 29 15:50:15 compute-0 NetworkManager[56360]: <info>  [1764431415.5337] manager: (tapedefdb98-b9): new Tun device (/org/freedesktop/NetworkManager/Devices/42)
Nov 29 15:50:15 compute-0 ovn_controller[97827]: 2025-11-29T15:50:15Z|00076|binding|INFO|Claiming lport edefdb98-b93f-44d4-b001-9327ca3fbfd5 for this chassis.
Nov 29 15:50:15 compute-0 ovn_controller[97827]: 2025-11-29T15:50:15Z|00077|binding|INFO|edefdb98-b93f-44d4-b001-9327ca3fbfd5: Claiming fa:16:3e:dc:b3:bc 10.100.0.10
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.540 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.551 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:b3:bc 10.100.0.10'], port_security=['fa:16:3e:dc:b3:bc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '857c831e-16aa-4908-8b4d-bf6fc64b8b23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da0a31ff-8236-4651-927c-b129d61fb520', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a2c00b2ea684b44ae64ef5a0dedb9db', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'c1a8d723-a8a5-4310-a62a-e1ff09806eca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a234e3-54be-49c8-9254-7f5360cba0d3, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=edefdb98-b93f-44d4-b001-9327ca3fbfd5) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.552 106713 INFO neutron.agent.ovn.metadata.agent [-] Port edefdb98-b93f-44d4-b001-9327ca3fbfd5 in datapath da0a31ff-8236-4651-927c-b129d61fb520 bound to our chassis#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.554 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network da0a31ff-8236-4651-927c-b129d61fb520#033[00m
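The "Matched UPDATE" line above shows the agent's PortBindingUpdatedEvent, an ovsdbapp row event watching the southbound Port_Binding table; once the row's chassis becomes the local one, the agent provisions the ovnmeta- namespace for the network. A skeleton of such an event class, with the chassis filtering deliberately simplified:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self, chassis_name):
            # events=('update',), table='Port_Binding', conditions=None,
            # matching the repr in the log line above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.chassis_name = chassis_name

        def run(self, event, row, old):
            # Simplified: react when the port becomes bound to our chassis.
            if row.chassis and row.chassis[0].name == self.chassis_name:
                print('Port %s bound to our chassis' % row.logical_port)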
Nov 29 15:50:15 compute-0 ovn_controller[97827]: 2025-11-29T15:50:15Z|00078|binding|INFO|Setting lport edefdb98-b93f-44d4-b001-9327ca3fbfd5 ovn-installed in OVS
Nov 29 15:50:15 compute-0 ovn_controller[97827]: 2025-11-29T15:50:15Z|00079|binding|INFO|Setting lport edefdb98-b93f-44d4-b001-9327ca3fbfd5 up in Southbound
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.566 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.572 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[f741e064-d275-4d56-acab-9d311116134b]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.573 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapda0a31ff-81 in ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.576 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapda0a31ff-80 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.576 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[13deadeb-67e5-4f03-915e-edf62fcbfe76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.577 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[fe6ded0e-6bb0-4af8-9126-245de42c9797]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.579 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.590 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.594 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[f15ec122-de25-4252-b1e9-caea8cae255d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 systemd-machined[155802]: New machine qemu-8-instance-00000008.
Nov 29 15:50:15 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Nov 29 15:50:15 compute-0 systemd-udevd[251130]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.631 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a1e08628-6f33-465d-a69f-3536db6cef84]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 NetworkManager[56360]: <info>  [1764431415.6459] device (tapedefdb98-b9): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:50:15 compute-0 NetworkManager[56360]: <info>  [1764431415.6476] device (tapedefdb98-b9): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.674 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[6d5f8196-d554-4837-8558-6a01862f22e0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 systemd-udevd[251133]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.681 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[160a2a16-c0e2-4be6-812c-c22f1807e874]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 NetworkManager[56360]: <info>  [1764431415.6841] manager: (tapda0a31ff-80): new Veth device (/org/freedesktop/NetworkManager/Devices/43)
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.711 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[f9e71a6d-86dc-450d-92cc-a4041f78cade]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.713 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[e9bdac45-c2b6-40d7-a3b4-cf7237809b21]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 NetworkManager[56360]: <info>  [1764431415.7344] device (tapda0a31ff-80): carrier: link connected
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.741 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[bec2885d-4690-4192-976c-4ce97519bd47]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.761 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[efbcd2e5-81f1-48a3-bbb5-310819fa3927]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda0a31ff-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:68:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517138, 'reachable_time': 41387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251160, 'error': None, 'target': 'ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.777 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[afdbabde-f890-4d8d-a270-7dd1c4a236fe]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe4e:681e'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517138, 'tstamp': 517138}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251161, 'error': None, 'target': 'ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.797 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[c181e221-6713-42e3-a0f2-a47bb9377308]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapda0a31ff-81'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:4e:68:1e'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 3, 'tx_packets': 1, 'rx_bytes': 266, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517138, 'reachable_time': 41387, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 3, 'inoctets': 224, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 3, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 224, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 3, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251162, 'error': None, 'target': 'ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
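The two large privsep replies above are raw pyroute2 netlink messages fetched from inside the ovnmeta- namespace while the agent verifies the new VETH pair. The same data can be read directly with pyroute2 (namespace and interface names from the log; requires root):

    from pyroute2 import NetNS

    ns = NetNS('ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520')
    try:
        idx = ns.link_lookup(ifname='tapda0a31ff-81')[0]
        link = ns.link('get', index=idx)[0]
        # fa:16:3e:4e:68:1e and 'UP' in the dumps above.
        print(link.get_attr('IFLA_ADDRESS'),
              link.get_attr('IFLA_OPERSTATE'))
    finally:
        ns.close()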
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.831 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[48e2e5bc-7abb-4d10-b105-d379055ba35f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.895 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[cb779dc0-e950-475a-980c-7f7c812368a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.897 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda0a31ff-80, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.897 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.897 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapda0a31ff-80, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:15 compute-0 kernel: tapda0a31ff-80: entered promiscuous mode
Nov 29 15:50:15 compute-0 NetworkManager[56360]: <info>  [1764431415.9007] manager: (tapda0a31ff-80): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/44)
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.899 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.901 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.905 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapda0a31ff-80, col_values=(('external_ids', {'iface-id': '6fd5af9f-807d-4404-8d7e-106bc3b2230a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.905 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 ovn_controller[97827]: 2025-11-29T15:50:15Z|00080|binding|INFO|Releasing lport 6fd5af9f-807d-4404-8d7e-106bc3b2230a from this chassis (sb_readonly=0)
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.918 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.920 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.920 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/da0a31ff-8236-4651-927c-b129d61fb520.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/da0a31ff-8236-4651-927c-b129d61fb520.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.921 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[cb6aa4fd-57be-4b9a-919e-ca43f80c0013]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.922 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-da0a31ff-8236-4651-927c-b129d61fb520
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/da0a31ff-8236-4651-927c-b129d61fb520.pid.haproxy
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID da0a31ff-8236-4651-927c-b129d61fb520
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 15:50:15 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:15.923 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520', 'env', 'PROCESS_TAG=haproxy-da0a31ff-8236-4651-927c-b129d61fb520', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/da0a31ff-8236-4651-927c-b129d61fb520.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
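The rendered haproxy config above binds the metadata VIP 169.254.169.254:80 inside the ovnmeta- namespace, injects an X-OVN-Network-ID header so the metadata agent can resolve the requesting network, and proxies to the agent's UNIX socket backend at /var/lib/neutron/metadata_proxy (option forwardfor additionally adds X-Forwarded-For with the caller's IP). A minimal client sketch against that socket, assuming it is run where the socket is reachable; the header values are copied from this log, and haproxy would normally add them itself:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/var/lib/neutron/metadata_proxy")
    conn.request("GET", "/openstack/latest/meta_data.json", headers={
        "X-OVN-Network-ID": "da0a31ff-8236-4651-927c-b129d61fb520",
        "X-Forwarded-For": "10.100.0.10",  # instance IP, normally set by haproxy
    })
    resp = conn.getresponse()
    print(resp.status, resp.read()[:200])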
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.959 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431415.9583576, 857c831e-16aa-4908-8b4d-bf6fc64b8b23 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.960 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] VM Started (Lifecycle Event)#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.985 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.991 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431415.958461, 857c831e-16aa-4908-8b4d-bf6fc64b8b23 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:15 compute-0 nova_compute[189485]: 2025-11-29 15:50:15.992 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.010 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.016 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.041 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
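The "Skip." above is nova's standard guard in sync_power_state: while a task is pending (here 'spawning'), a DB/hypervisor power-state mismatch is expected and must not be "corrected" mid-build. A condensed, purely illustrative version of that decision (names are mine, not nova's):

    def should_sync_power_state(db_power_state, vm_power_state, task_state):
        # A pending task (e.g. 'spawning') means the mismatch is expected;
        # syncing now could stop a VM that is still being built.
        if task_state is not None:
            return False
        return db_power_state != vm_power_state

    print(should_sync_power_state(0, 3, "spawning"))  # False -> "Skip." as logged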
Nov 29 15:50:16 compute-0 podman[251200]: 2025-11-29 15:50:16.309566771 +0000 UTC m=+0.056171760 container create a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 15:50:16 compute-0 systemd[1]: Started libpod-conmon-a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10.scope.
Nov 29 15:50:16 compute-0 podman[251200]: 2025-11-29 15:50:16.28086349 +0000 UTC m=+0.027468419 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:50:16 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:50:16 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/31b6646003fb1a564cf1bc9640e0b5234fdf10282007a7458028ce7514388f44/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:50:16 compute-0 podman[251200]: 2025-11-29 15:50:16.428532699 +0000 UTC m=+0.175137648 container init a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 15:50:16 compute-0 podman[251200]: 2025-11-29 15:50:16.43675064 +0000 UTC m=+0.183355589 container start a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.456 189489 DEBUG nova.network.neutron [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Successfully updated port: 471b576d-abd9-4813-915c-33fdffb4ae94 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:50:16 compute-0 neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520[251215]: [NOTICE]   (251219) : New worker (251221) forked
Nov 29 15:50:16 compute-0 neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520[251215]: [NOTICE]   (251219) : Loading success.
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.484 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.484 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquired lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.484 189489 DEBUG nova.network.neutron [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:50:16 compute-0 nova_compute[189485]: 2025-11-29 15:50:16.724 189489 DEBUG nova.network.neutron [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:50:17 compute-0 nova_compute[189485]: 2025-11-29 15:50:17.765 189489 DEBUG nova.network.neutron [req-f8f529a8-7c50-4c4e-b573-e2646ebd801e req-c0b51d95-61d0-47f4-8b68-7ce34bc9f26c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Updated VIF entry in instance network info cache for port edefdb98-b93f-44d4-b001-9327ca3fbfd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:50:17 compute-0 nova_compute[189485]: 2025-11-29 15:50:17.766 189489 DEBUG nova.network.neutron [req-f8f529a8-7c50-4c4e-b573-e2646ebd801e req-c0b51d95-61d0-47f4-8b68-7ce34bc9f26c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Updating instance_info_cache with network_info: [{"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:17 compute-0 nova_compute[189485]: 2025-11-29 15:50:17.796 189489 DEBUG oslo_concurrency.lockutils [req-f8f529a8-7c50-4c4e-b573-e2646ebd801e req-c0b51d95-61d0-47f4-8b68-7ce34bc9f26c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
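The instance_info_cache payload logged at 15:50:17.766 is a JSON list of VIF dicts, each carrying its network, subnets, and fixed IPs. A short sketch for pulling the addresses back out of such a blob, assuming the JSON list has been saved to network_info.json:

    import json

    with open("network_info.json") as f:   # the JSON list from the log line
        network_info = json.load(f)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], vif["devname"], ip["address"])
    # -> edefdb98-... tapedefdb98-b9 10.100.0.10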
Nov 29 15:50:18 compute-0 podman[251230]: 2025-11-29 15:50:18.644952243 +0000 UTC m=+0.075440099 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.688 189489 DEBUG nova.compute.manager [req-ba711b1c-7066-4276-abe9-d8fea9e735c5 req-42980dc2-66bc-4fc0-a1a2-78167ce21c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received event network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.689 189489 DEBUG oslo_concurrency.lockutils [req-ba711b1c-7066-4276-abe9-d8fea9e735c5 req-42980dc2-66bc-4fc0-a1a2-78167ce21c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.689 189489 DEBUG oslo_concurrency.lockutils [req-ba711b1c-7066-4276-abe9-d8fea9e735c5 req-42980dc2-66bc-4fc0-a1a2-78167ce21c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.690 189489 DEBUG oslo_concurrency.lockutils [req-ba711b1c-7066-4276-abe9-d8fea9e735c5 req-42980dc2-66bc-4fc0-a1a2-78167ce21c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.690 189489 DEBUG nova.compute.manager [req-ba711b1c-7066-4276-abe9-d8fea9e735c5 req-42980dc2-66bc-4fc0-a1a2-78167ce21c65 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Processing event network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.691 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
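Lines 15:50:18.688-.691 show both halves of nova's external-event handshake: the spawning thread registered interest in network-vif-plugged before powering on the guest, and neutron's event just released it ("wait completed in 2 seconds"). A stripped-down model of the pattern with threading.Event, for illustration only (the real implementation is nova.compute.manager.InstanceEvents):

    import threading

    _events = {}

    def prepare(tag):
        _events[tag] = threading.Event()   # register interest up front

    def pop(tag):
        _events[tag].set()                 # external event has arrived

    def wait_for(tag, timeout=300):
        if not _events[tag].wait(timeout): # block the spawning thread
            raise TimeoutError(f"{tag} never arrived")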
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.695 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431418.6954885, 857c831e-16aa-4908-8b4d-bf6fc64b8b23 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.696 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.699 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.705 189489 INFO nova.virt.libvirt.driver [-] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Instance spawned successfully.#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.706 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.708 189489 DEBUG nova.network.neutron [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updating instance_info_cache with network_info: [{"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.753 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.754 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.754 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.755 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.756 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.757 189489 DEBUG nova.virt.libvirt.driver [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.765 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.767 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Releasing lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.768 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Instance network_info: |[{"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.773 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Start _get_guest_xml network_info=[{"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.779 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.787 189489 WARNING nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.815 189489 DEBUG nova.virt.libvirt.host [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.816 189489 DEBUG nova.virt.libvirt.host [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.819 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.825 189489 DEBUG nova.virt.libvirt.host [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.825 189489 DEBUG nova.virt.libvirt.host [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.826 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.826 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.826 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.827 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.827 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.827 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.827 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.828 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.828 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.828 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.828 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.828 189489 DEBUG nova.virt.hardware [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
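The topology walk above is reproducible from the logged inputs: with limits 65536:65536:65536 and no preferences, nova enumerates sockets x cores x threads combinations whose product equals the vCPU count. A simplified sketch (the real logic in nova.virt.hardware also weighs flavor/image preferences when sorting):

    def possible_topologies(vcpus, max_sockets, max_cores, max_threads):
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1, 65536, 65536, 65536))  # [(1, 1, 1)], as logged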
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.831 189489 DEBUG nova.virt.libvirt.vif [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:50:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-153023418',display_name='tempest-ServerActionsTestJSON-server-153023418',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-153023418',id=9,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHe84/Vw1/UE6MjH9hSoZ8S+lF+m9Cdu9Av7vTw88OmQpmBt5taKTJ/r+cWSkzwOPRZEvDuFb+SsqaHgLTHP3NrHdnllgdosFCEIeqEnWDvyEA3QKG1liQQzPUp2/9l1bw==',key_name='tempest-keypair-106632266',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79e3732a895b43ce86538671ea9e7670',ramdisk_id='',reservation_id='r-7ix6aam2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1517137287',owner_user_name='tempest-ServerActionsTestJSON-1517137287-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:50:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b595faab5dfa4b4e9aff6a34b1473172',uuid=ea685573-5d12-4d41-8c8d-1d73dc63399d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.832 189489 DEBUG nova.network.os_vif_util [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converting VIF {"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.832 189489 DEBUG nova.network.os_vif_util [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.835 189489 DEBUG nova.objects.instance [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea685573-5d12-4d41-8c8d-1d73dc63399d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.852 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <uuid>ea685573-5d12-4d41-8c8d-1d73dc63399d</uuid>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <name>instance-00000009</name>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <nova:name>tempest-ServerActionsTestJSON-server-153023418</nova:name>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:50:18</nova:creationTime>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:50:18 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:50:18 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:50:18 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:50:18 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:50:18 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:50:18 compute-0 nova_compute[189485]:        <nova:user uuid="b595faab5dfa4b4e9aff6a34b1473172">tempest-ServerActionsTestJSON-1517137287-project-member</nova:user>
Nov 29 15:50:18 compute-0 nova_compute[189485]:        <nova:project uuid="79e3732a895b43ce86538671ea9e7670">tempest-ServerActionsTestJSON-1517137287</nova:project>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:50:18 compute-0 nova_compute[189485]:        <nova:port uuid="471b576d-abd9-4813-915c-33fdffb4ae94">
Nov 29 15:50:18 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <system>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <entry name="serial">ea685573-5d12-4d41-8c8d-1d73dc63399d</entry>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <entry name="uuid">ea685573-5d12-4d41-8c8d-1d73dc63399d</entry>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </system>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <os>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  </os>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <features>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  </features>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.config"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:b8:50:d3"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <target dev="tap471b576d-ab"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/console.log" append="off"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <video>
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </video>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:50:18 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:50:18 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:50:18 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:50:18 compute-0 nova_compute[189485]: </domain>
Nov 29 15:50:18 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
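The domain definition nova just rendered is plain libvirt XML and can be inspected offline. A small sketch extracting the fields most useful when cross-referencing this log, assuming the <domain> block above has been saved to domain.xml:

    import xml.etree.ElementTree as ET

    dom = ET.parse("domain.xml").getroot()
    print(dom.findtext("name"))                 # instance-00000009
    print(dom.findtext("memory"), "KiB")        # 131072 KiB = 128 MiB (m1.nano)
    for iface in dom.findall("./devices/interface"):
        print(iface.get("type"),
              iface.find("mac").get("address"),
              iface.find("target").get("dev"))  # ethernet fa:16:3e:b8:50:d3 tap471b576d-ab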
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.853 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Preparing to wait for external event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.853 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.853 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.853 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.854 189489 DEBUG nova.virt.libvirt.vif [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:50:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-153023418',display_name='tempest-ServerActionsTestJSON-server-153023418',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-153023418',id=9,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHe84/Vw1/UE6MjH9hSoZ8S+lF+m9Cdu9Av7vTw88OmQpmBt5taKTJ/r+cWSkzwOPRZEvDuFb+SsqaHgLTHP3NrHdnllgdosFCEIeqEnWDvyEA3QKG1liQQzPUp2/9l1bw==',key_name='tempest-keypair-106632266',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='79e3732a895b43ce86538671ea9e7670',ramdisk_id='',reservation_id='r-7ix6aam2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-1517137287',owner_user_name='tempest-ServerActionsTestJSON-1517137287-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:50:13Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b595faab5dfa4b4e9aff6a34b1473172',uuid=ea685573-5d12-4d41-8c8d-1d73dc63399d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.854 189489 DEBUG nova.network.os_vif_util [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converting VIF {"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.855 189489 DEBUG nova.network.os_vif_util [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
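The Instance(...) record above carries the tempest boot script as base64 user_data. Decoding it is mechanical; a short illustrative snippet (the string is copied verbatim from the record):

    import base64

    # user_data field copied from the Instance(...) record logged above
    user_data = ('IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIK'
                 'Y2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=')
    print(base64.b64decode(user_data).decode())
    # prints:
    #   #!/bin/sh
    #   echo "Printing cirros user authorized keys"
    #   cat ~cirros/.ssh/authorized_keys || true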
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.855 189489 DEBUG os_vif [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
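The three lines above show the plug pipeline: nova's libvirt vif driver receives its own VIF dict, os_vif_util converts it into an os-vif VIFOpenVSwitch object, and os_vif.plug() hands it to the ovs plugin. A standalone sketch of the same call path against the os-vif public API (values copied from the log; object registration and privsep setup vary by release, so treat this as an approximation rather than nova's exact code):

    import os_vif
    from os_vif import objects as osv_objects

    os_vif.initialize()          # loads the 'ovs' plugin via stevedore
    osv_objects.register_all()   # registers the versioned object classes

    network = osv_objects.network.Network(
        id='29b0dade-4512-451e-9fdc-1b8d13fd5972', bridge='br-int', mtu=1442)
    profile = osv_objects.vif.VIFPortProfileOpenVSwitch(
        interface_id='471b576d-abd9-4813-915c-33fdffb4ae94')
    vif = osv_objects.vif.VIFOpenVSwitch(
        id='471b576d-abd9-4813-915c-33fdffb4ae94',
        address='fa:16:3e:b8:50:d3',
        vif_name='tap471b576d-ab',
        bridge_name='br-int',
        port_profile=profile,
        network=network)
    instance = osv_objects.instance_info.InstanceInfo(
        uuid='ea685573-5d12-4d41-8c8d-1d73dc63399d',
        name='tempest-ServerActionsTestJSON-server-153023418')

    # Requires root/privsep and a running Open vSwitch; emits the
    # AddBridge/AddPort transactions visible in the next few lines.
    os_vif.plug(vif, instance)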
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.856 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.856 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.856 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.858 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.859 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap471b576d-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.859 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap471b576d-ab, col_values=(('external_ids', {'iface-id': '471b576d-abd9-4813-915c-33fdffb4ae94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:50:d3', 'vm-uuid': 'ea685573-5d12-4d41-8c8d-1d73dc63399d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
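AddBridgeCommand, AddPortCommand and DbSetCommand are ovsdbapp command objects batched into a single OVSDB transaction; "Transaction caused no change" simply means a may_exist=True command found its row already present, so the plug is idempotent. Roughly the same sequence written directly against ovsdbapp (the db.sock endpoint is an assumption for a stock install):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # no-ops if the rows already exist, exactly as logged above
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tap471b576d-ab', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap471b576d-ab',
            ('external_ids', {
                'iface-id': '471b576d-abd9-4813-915c-33fdffb4ae94',
                'attached-mac': 'fa:16:3e:b8:50:d3',
                'vm-uuid': 'ea685573-5d12-4d41-8c8d-1d73dc63399d'})))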
Nov 29 15:50:18 compute-0 NetworkManager[56360]: <info>  [1764431418.8623] manager: (tap471b576d-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.864 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.865 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.870 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.870 189489 INFO os_vif [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab')#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.873 189489 INFO nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Took 8.32 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.874 189489 DEBUG nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.880 189489 DEBUG nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received event network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.880 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.881 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.881 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.881 189489 DEBUG nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Processing event network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.881 189489 DEBUG nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received event network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.882 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.882 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.882 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.882 189489 DEBUG nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] No waiting events found dispatching network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.885 189489 WARNING nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received unexpected event network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 for instance with vm_state building and task_state spawning.#033[00m
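The WARNING above is benign during a build: the network-vif-plugged event arrived while no waiter was registered for it (the spawning thread either had not prepared one yet or had already consumed it), so pop_instance_event found nothing to dispatch. The prepare/pop handshake that the surrounding lock lines implement reduces to a per-instance map of named events guarded by one lock; a simplified model, not nova's actual code:

    import threading

    class InstanceEvents:
        """Toy model of nova's prepare_for_instance_event/pop_instance_event."""

        def __init__(self):
            self._lock = threading.Lock()
            self._events = {}   # instance_uuid -> {event_name: threading.Event}

        def prepare(self, uuid, name):
            with self._lock:    # "Acquiring lock '<uuid>-events'"
                ev = self._events.setdefault(uuid, {}).setdefault(
                    name, threading.Event())
            return ev           # builder thread later calls ev.wait(timeout)

        def pop(self, uuid, name):
            with self._lock:
                ev = self._events.get(uuid, {}).pop(name, None)
            if ev is None:
                # -> "No waiting events found dispatching ..." plus the WARNING
                return False
            ev.set()            # wakes wait_for_instance_event
            return True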
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.885 189489 DEBUG nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.885 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.885 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.886 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.886 189489 DEBUG nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Processing event network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.886 189489 DEBUG nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.886 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.886 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.887 189489 DEBUG oslo_concurrency.lockutils [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.887 189489 DEBUG nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] No waiting events found dispatching network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.887 189489 WARNING nova.compute.manager [req-8452fff0-0ad9-4592-8f9e-467ef7a89f97 req-e61539c9-7a31-4f23-a614-e1e4177849b3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received unexpected event network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.893 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Instance event wait completed in 10 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.893 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Instance event wait completed in 6 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.904 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.905 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.905 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431418.9046197, 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.905 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.918 189489 INFO nova.virt.libvirt.driver [-] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Instance spawned successfully.#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.919 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.923 189489 INFO nova.virt.libvirt.driver [-] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Instance spawned successfully.#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.925 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.948 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.957 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
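"current DB power_state: 0, VM power_state: 1" compares the value stored in nova's database against what libvirt now reports. The integers are the constants from nova.compute.power_state; for reading lines like this:

    # Constants as defined in nova/compute/power_state.py
    NOSTATE   = 0x00   # 0: nothing recorded yet; normal while spawning
    RUNNING   = 0x01   # 1: the guest has started on the hypervisor
    PAUSED    = 0x03
    SHUTDOWN  = 0x04
    CRASHED   = 0x06
    SUSPENDED = 0x07

    db_state, vm_state = 0, 1   # the values logged above
    # They differ, but task_state is 'spawning', so nova skips the sync
    # (see the "During sync_power_state ... Skip." line further down).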
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.963 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.964 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.964 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.964 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.965 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.965 189489 DEBUG nova.virt.libvirt.driver [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.978 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.979 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.979 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.979 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.980 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:18 compute-0 nova_compute[189485]: 2025-11-29 15:50:18.980 189489 DEBUG nova.virt.libvirt.driver [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
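The "Found default for hw_* ..." runs record the libvirt driver choosing bus/model defaults for properties the image left unset and persisting them, so the guest keeps identical virtual hardware across future rebuilds. In effect (a sketch; the image_ key prefix follows the system_metadata entries visible in the Instance(...) dump near the top of this excerpt):

    # Defaults the driver settled on for these q35 guests, per the log:
    DEFAULTS = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_details(system_metadata, image_props):
        """Persist driver-chosen defaults for properties the image omitted."""
        for prop, value in DEFAULTS.items():
            if prop not in image_props:
                system_metadata.setdefault('image_' + prop, value)
        return system_metadata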
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.001 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.001 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.001 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] No VIF found with MAC fa:16:3e:b8:50:d3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.001 189489 INFO nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Using config drive#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.013 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.013 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431418.904708, a8fbb028-7553-448d-8ee5-e0b34ade7315 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.013 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.052 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.056 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.078 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.080 189489 INFO nova.compute.manager [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Took 10.66 seconds to build instance.#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.084 189489 INFO nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Took 17.79 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.084 189489 DEBUG nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.096 189489 INFO nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Took 16.48 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.096 189489 DEBUG nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.103 189489 DEBUG oslo_concurrency.lockutils [None req-74441e94-897d-4fef-b47b-fa95214d8162 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.785s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.173 189489 INFO nova.compute.manager [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Took 17.26 seconds to build instance.#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.177 189489 INFO nova.compute.manager [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Took 18.34 seconds to build instance.#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.196 189489 DEBUG oslo_concurrency.lockutils [None req-b15f2e72-783f-4394-84e0-46b375a9e8ea 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.463s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.207 189489 DEBUG oslo_concurrency.lockutils [None req-f22fc99b-a73d-4f9e-a38b-82ab50ba64e3 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 17.369s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.462 189489 INFO nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Creating config drive at /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.config#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.476 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznnetij1 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.607 189489 DEBUG oslo_concurrency.processutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpznnetij1" returned: 0 in 0.131s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
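The processutils pair above is the config drive being built: nova stages the metadata files in a tempdir and wraps them in an ISO9660 volume labelled config-2, which cloud-init in the guest finds by that label. The same invocation from Python (output path shortened and the publisher's version suffix omitted; the tempdir stands in for whatever metadata tree was staged):

    import subprocess

    subprocess.run(
        ['/usr/bin/mkisofs',
         '-o', 'disk.config',             # image attached to the guest
         '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
         '-publisher', 'OpenStack Compute',
         '-quiet', '-J', '-r',
         '-V', 'config-2',                # volume label the guest probes for
         '/tmp/tmpznnetij1'],             # staged metadata directory
        check=True)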
Nov 29 15:50:19 compute-0 NetworkManager[56360]: <info>  [1764431419.6719] manager: (tap471b576d-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/46)
Nov 29 15:50:19 compute-0 kernel: tap471b576d-ab: entered promiscuous mode
Nov 29 15:50:19 compute-0 ovn_controller[97827]: 2025-11-29T15:50:19Z|00081|binding|INFO|Claiming lport 471b576d-abd9-4813-915c-33fdffb4ae94 for this chassis.
Nov 29 15:50:19 compute-0 ovn_controller[97827]: 2025-11-29T15:50:19Z|00082|binding|INFO|471b576d-abd9-4813-915c-33fdffb4ae94: Claiming fa:16:3e:b8:50:d3 10.100.0.11
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.686 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.701 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:50:d3 10.100.0.11'], port_security=['fa:16:3e:b8:50:d3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ea685573-5d12-4d41-8c8d-1d73dc63399d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79e3732a895b43ce86538671ea9e7670', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd8e2a464-eef4-4c41-a809-d94caef28d98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02d3693f-5198-43ab-859b-ff500142407c, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=471b576d-abd9-4813-915c-33fdffb4ae94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.703 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 471b576d-abd9-4813-915c-33fdffb4ae94 in datapath 29b0dade-4512-451e-9fdc-1b8d13fd5972 bound to our chassis#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.706 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 29b0dade-4512-451e-9fdc-1b8d13fd5972#033[00m
Nov 29 15:50:19 compute-0 systemd-udevd[251272]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:50:19 compute-0 ovn_controller[97827]: 2025-11-29T15:50:19Z|00083|binding|INFO|Setting lport 471b576d-abd9-4813-915c-33fdffb4ae94 ovn-installed in OVS
Nov 29 15:50:19 compute-0 ovn_controller[97827]: 2025-11-29T15:50:19Z|00084|binding|INFO|Setting lport 471b576d-abd9-4813-915c-33fdffb4ae94 up in Southbound
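ovn-controller's binding lines close the loop: it claims the logical port for this chassis, stamps the OVS interface with external_ids:ovn-installed, and sets the Port_Binding up in the Southbound DB; that up transition is what makes Neutron emit the network-vif-plugged event nova waits on. A quick way to verify the OVS side, reusing the ovsdbapp pattern sketched earlier (socket path again an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    ext_ids = api.db_get('Interface', 'tap471b576d-ab',
                         'external_ids').execute(check_error=True)
    print(ext_ids.get('ovn-installed'))   # 'true' once ovn-controller is done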
Nov 29 15:50:19 compute-0 nova_compute[189485]: 2025-11-29 15:50:19.720 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:19 compute-0 NetworkManager[56360]: <info>  [1764431419.7232] device (tap471b576d-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:50:19 compute-0 NetworkManager[56360]: <info>  [1764431419.7238] device (tap471b576d-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.721 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[979d4467-8d4a-4825-8868-228361204fc9]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.723 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap29b0dade-41 in ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.727 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap29b0dade-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.727 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[94ec2971-8715-4da2-91a1-d75ef5f5920e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.730 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[fe226859-1649-4fd8-84de-d5747a4a3e70]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
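"Provisioning metadata" concretely means: create the ovnmeta-<network-uuid> namespace, add a veth pair whose -41 end lives inside it (the "Creating VETH" line above) while the -40 end stays in the root namespace to be plugged into br-int just below, then run haproxy inside the namespace. The privsep replies are neutron driving pyroute2 as root; a bare-bones equivalent (names copied from the log; must run as root):

    from pyroute2 import IPRoute, netns

    ns_name = 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972'
    netns.create(ns_name)

    with IPRoute() as ipr:
        # veth pair: -40 stays outside for br-int, -41 moves into the namespace
        ipr.link('add', ifname='tap29b0dade-40', kind='veth',
                 peer='tap29b0dade-41')
        inner = ipr.link_lookup(ifname='tap29b0dade-41')[0]
        ipr.link('set', index=inner, net_ns_fd=ns_name)
        outer = ipr.link_lookup(ifname='tap29b0dade-40')[0]
        ipr.link('set', index=outer, state='up')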
Nov 29 15:50:19 compute-0 systemd-machined[155802]: New machine qemu-9-instance-00000009.
Nov 29 15:50:19 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.761 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[feb38451-d3a6-414d-a4ca-50c3d1e26281]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.789 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1df3fad9-e9f5-462c-8834-5d84ef614b92]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.835 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[33596343-1f1b-4ab6-a33d-cf194458030a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 systemd-udevd[251278]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:50:19 compute-0 NetworkManager[56360]: <info>  [1764431419.8439] manager: (tap29b0dade-40): new Veth device (/org/freedesktop/NetworkManager/Devices/47)
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.842 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[d52ac449-c494-4181-8a35-688b93dab53f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.885 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[599f5f0c-9dab-48aa-b169-86f135f9f711]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.889 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[7d32c8a6-f8bc-409b-83c1-a18f01b78d9c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 NetworkManager[56360]: <info>  [1764431419.9158] device (tap29b0dade-40): carrier: link connected
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.923 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[8300a968-7343-4ba4-90e3-b8c42e51c24e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.943 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[694fb734-188b-434c-8e0d-46d752343a83]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29b0dade-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:85:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517556, 'reachable_time': 38694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251308, 'error': None, 'target': 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.957 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[96348326-74ab-4149-919d-e0508249c767]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:85c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 517556, 'tstamp': 517556}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251310, 'error': None, 'target': 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:19 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:19.974 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6e2dbd8f-714d-42ee-8442-1589f5346c64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29b0dade-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:85:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 27], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517556, 'reachable_time': 38694, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251311, 'error': None, 'target': 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
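The two oversized privsep replies are raw pyroute2 netlink dumps (RTM_NEWLINK, plus the RTM_NEWADDR between them) with which the agent confirms that tap29b0dade-41 is up inside the namespace, has carrier, owns MAC fa:16:3e:c1:85:c8, and has its fe80:: link-local address. The same check reduced to essentials:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972') as ns:
        for link in ns.get_links():
            if link.get_attr('IFLA_IFNAME') == 'tap29b0dade-41':
                print(link['state'], link.get_attr('IFLA_ADDRESS'))
                # e.g.: up fa:16:3e:c1:85:c8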
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.012 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[9a994c6a-0b90-4eed-a8ab-e4927a00e44e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.091 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[f9a53ffd-a5e1-488f-96d6-b79878a72721]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.094 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29b0dade-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.094 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.095 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29b0dade-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:20 compute-0 kernel: tap29b0dade-40: entered promiscuous mode
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.098 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:20 compute-0 NetworkManager[56360]: <info>  [1764431420.1003] manager: (tap29b0dade-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.100 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.110 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap29b0dade-40, col_values=(('external_ids', {'iface-id': '0c9e125e-3b1f-4aef-b336-cdad32359771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
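The three ovsdbapp transactions above rewire the metadata tap in one logical step: drop any stale port from br-ex, attach the port to br-int, then stamp external_ids:iface-id so ovn-controller can match the OVS interface to its logical port. A sketch of the CLI equivalent; the agent itself goes through ovsdbapp's IDL, so these ovs-vsctl invocations are illustrative, not neutron's actual code path.

import subprocess

def vsctl(*args):
    # Each call mirrors one of the logged ovsdbapp commands.
    subprocess.run(['ovs-vsctl', *args], check=True)

port = 'tap29b0dade-40'
iface_id = '0c9e125e-3b1f-4aef-b336-cdad32359771'
vsctl('--if-exists', 'del-port', 'br-ex', port)   # DelPortCommand
vsctl('--may-exist', 'add-port', 'br-int', port)  # AddPortCommand
vsctl('set', 'Interface', port,                   # DbSetCommand
      'external_ids:iface-id=%s' % iface_id)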
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.112 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.113 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:20 compute-0 ovn_controller[97827]: 2025-11-29T15:50:20Z|00085|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.114 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/29b0dade-4512-451e-9fdc-1b8d13fd5972.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/29b0dade-4512-451e-9fdc-1b8d13fd5972.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
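The ENOENT here is expected rather than an error: the agent probes the pidfile to see whether a metadata proxy already serves this network, and a missing file simply means one has to be spawned. A minimal sketch of that tolerant read, modeled on the get_value_from_file behavior logged above.

def get_value_from_file(path):
    # Return the file's contents, or None if it does not exist yet.
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return None

pid = get_value_from_file(
    '/var/lib/neutron/external/pids/'
    '29b0dade-4512-451e-9fdc-1b8d13fd5972.pid.haproxy')
# pid is None on the first spawn, as in the log above.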
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.129 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[ae0d6438-59c6-427d-8213-9114d452b376]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.131 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-29b0dade-4512-451e-9fdc-1b8d13fd5972
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/29b0dade-4512-451e-9fdc-1b8d13fd5972.pid.haproxy
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID 29b0dade-4512-451e-9fdc-1b8d13fd5972
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
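The config dumped above is rendered from a template keyed on the network UUID: the log-tag, pidfile, and X-OVN-Network-ID header all embed the same ID. A minimal rendering sketch; the real template lives in neutron.agent.ovn.metadata.driver and carries more fields than this stand-in does.

# Abridged stand-in for the template; only fields visible in the dump.
HAPROXY_CFG = """\
global
    log         /dev/log local0 debug
    log-tag     haproxy-metadata-proxy-{network_id}
    pidfile     {pidfile}
    daemon
listen listener
    bind 169.254.169.254:80
    http-request add-header X-OVN-Network-ID {network_id}
"""

network_id = '29b0dade-4512-451e-9fdc-1b8d13fd5972'
cfg = HAPROXY_CFG.format(
    network_id=network_id,
    pidfile='/var/lib/neutron/external/pids/%s.pid.haproxy' % network_id)
conf_path = '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id
with open(conf_path, 'w') as f:
    f.write(cfg)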
Nov 29 15:50:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:20.132 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'env', 'PROCESS_TAG=haproxy-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/29b0dade-4512-451e-9fdc-1b8d13fd5972.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
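The rendered config is then handed to rootwrap so haproxy starts inside the network namespace ovnmeta-<network_id>, where 169.254.169.254 is actually bound. A sketch of composing that argv, taken directly from the command logged above; haproxy backgrounds itself because the config says 'daemon'.

import subprocess

network_id = '29b0dade-4512-451e-9fdc-1b8d13fd5972'
cmd = ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf',
       'ip', 'netns', 'exec', 'ovnmeta-%s' % network_id,
       'env', 'PROCESS_TAG=haproxy-%s' % network_id,
       'haproxy', '-f',
       '/var/lib/neutron/ovn-metadata-proxy/%s.conf' % network_id]
subprocess.Popen(cmd)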
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.133 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.307 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431420.306707, ea685573-5d12-4d41-8c8d-1d73dc63399d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.307 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] VM Started (Lifecycle Event)#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.332 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.337 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431420.3068163, ea685573-5d12-4d41-8c8d-1d73dc63399d => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.338 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.358 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.365 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.383 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
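The sync messages above compare three views of the instance: the DB power_state (0, NOSTATE, nothing recorded yet), the hypervisor's report (3, PAUSED while libvirt finishes the spawn), and the pending task_state. Nova defers the sync whenever a task is still in flight, which is the "Skip" logged here. A condensed sketch of that decision; the constants match nova.compute.power_state, but the function is a simplification, not nova's code.

NOSTATE, RUNNING, PAUSED = 0, 1, 3  # nova.compute.power_state values

def sync_power_state(db_power_state, vm_power_state, task_state):
    # Lifecycle-driven sync is deferred while a task is pending.
    if task_state is not None:
        return 'skip: pending task %r' % task_state
    if db_power_state != vm_power_state:
        return 'update DB %s -> %s' % (db_power_state, vm_power_state)
    return 'in sync'

print(sync_power_state(NOSTATE, PAUSED, 'spawning'))  # skip, as logged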
Nov 29 15:50:20 compute-0 nova_compute[189485]: 2025-11-29 15:50:20.593 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:20 compute-0 podman[251348]: 2025-11-29 15:50:20.656219344 +0000 UTC m=+0.091419389 container create 5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 15:50:20 compute-0 systemd[1]: Started libpod-conmon-5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca.scope.
Nov 29 15:50:20 compute-0 podman[251348]: 2025-11-29 15:50:20.616830144 +0000 UTC m=+0.052030229 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:50:20 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:50:20 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/57cd72c40a2623cd9019fa8c8e3bb08afffd1707aead34290bcca445d1a5d026/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:50:20 compute-0 podman[251348]: 2025-11-29 15:50:20.755108331 +0000 UTC m=+0.190308386 container init 5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 15:50:20 compute-0 podman[251348]: 2025-11-29 15:50:20.764550325 +0000 UTC m=+0.199750360 container start 5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 15:50:20 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[251362]: [NOTICE]   (251367) : New worker (251369) forked
Nov 29 15:50:20 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[251362]: [NOTICE]   (251367) : Loading success.
Nov 29 15:50:21 compute-0 nova_compute[189485]: 2025-11-29 15:50:21.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:21 compute-0 nova_compute[189485]: 2025-11-29 15:50:21.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.788 189489 DEBUG nova.compute.manager [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received event network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.788 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.788 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.788 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.789 189489 DEBUG nova.compute.manager [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] No waiting events found dispatching network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.789 189489 WARNING nova.compute.manager [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received unexpected event network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 for instance with vm_state active and task_state None.#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.789 189489 DEBUG nova.compute.manager [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-changed-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.789 189489 DEBUG nova.compute.manager [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Refreshing instance network info cache due to event network-changed-471b576d-abd9-4813-915c-33fdffb4ae94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.789 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.789 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:22 compute-0 nova_compute[189485]: 2025-11-29 15:50:22.789 189489 DEBUG nova.network.neutron [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Refreshing network info cache for port 471b576d-abd9-4813-915c-33fdffb4ae94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:50:23 compute-0 nova_compute[189485]: 2025-11-29 15:50:23.478 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:23 compute-0 nova_compute[189485]: 2025-11-29 15:50:23.861 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.523 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.524 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.644 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.700 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.701 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.762 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.767 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.827 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.831 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.892 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.898 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.954 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:24 compute-0 nova_compute[189485]: 2025-11-29 15:50:24.955 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.024 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.030 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.086 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.087 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.142 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
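Each disk in the periodic resource audit above is probed with the same command: qemu-img info run under oslo's prlimit shim, capped at 1 GiB of address space and 30 s of CPU, with --force-share so a running guest is not disturbed and --output=json for parsing. A sketch of issuing one such probe and reading the result; the argv mirrors the logged command.

import json
import subprocess

def disk_info(path):
    # Same shape as the audited command: resource caps, C locale, JSON out.
    cmd = ['/usr/bin/python3', '-m', 'oslo_concurrency.prlimit',
           '--as=1073741824', '--cpu=30', '--',
           'env', 'LC_ALL=C', 'LANG=C',
           'qemu-img', 'info', path, '--force-share', '--output=json']
    out = subprocess.run(cmd, check=True, capture_output=True,
                         text=True).stdout
    return json.loads(out)

# info = disk_info('/var/lib/nova/instances/<uuid>/disk')
# info['format'], info['virtual-size'], info['actual-size'], ...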
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.595 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.631 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.632 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4921MB free_disk=72.33706665039062GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.633 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.634 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.647 189489 DEBUG nova.network.neutron [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updated VIF entry in instance network info cache for port 471b576d-abd9-4813-915c-33fdffb4ae94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.648 189489 DEBUG nova.network.neutron [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updating instance_info_cache with network_info: [{"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
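The cache entry above is nova's network_info model for the port: VIF id, MAC, the OVN-backed network (bridge br-int, tunneled, MTU 1442) and its fixed IP. A sketch of reading the interesting fields back out; the 'vif' literal is abridged from the logged entry.

vif = {
    'id': '471b576d-abd9-4813-915c-33fdffb4ae94',
    'address': 'fa:16:3e:b8:50:d3',
    'devname': 'tap471b576d-ab',
    'network': {
        'meta': {'mtu': 1442, 'tunneled': True},
        'subnets': [{'cidr': '10.100.0.0/28',
                     'ips': [{'address': '10.100.0.11'}]}],
    },
}

fixed_ips = [ip['address']
             for subnet in vif['network']['subnets']
             for ip in subnet['ips']]
print(vif['devname'], vif['address'], fixed_ips,
      vif['network']['meta']['mtu'])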
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.962 189489 DEBUG nova.compute.manager [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.977 189489 DEBUG oslo_concurrency.lockutils [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.978 189489 DEBUG oslo_concurrency.lockutils [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.981 189489 DEBUG oslo_concurrency.lockutils [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.982 189489 DEBUG nova.compute.manager [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Processing event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.983 189489 DEBUG nova.compute.manager [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received event network-changed-b14cc28b-87b6-499b-abf4-437c4c5d74e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.983 189489 DEBUG nova.compute.manager [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Refreshing instance network info cache due to event network-changed-b14cc28b-87b6-499b-abf4-437c4c5d74e9. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.984 189489 DEBUG oslo_concurrency.lockutils [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.984 189489 DEBUG oslo_concurrency.lockutils [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.984 189489 DEBUG nova.network.neutron [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Refreshing network info cache for port b14cc28b-87b6-499b-abf4-437c4c5d74e9 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.986 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Instance event wait completed in 5 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.992 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431425.9919417, ea685573-5d12-4d41-8c8d-1d73dc63399d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.992 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:50:25 compute-0 nova_compute[189485]: 2025-11-29 15:50:25.995 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.002 189489 INFO nova.virt.libvirt.driver [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Instance spawned successfully.#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.003 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.035 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.036 189489 DEBUG nova.compute.manager [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.036 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.037 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.038 189489 DEBUG oslo_concurrency.lockutils [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.038 189489 DEBUG nova.compute.manager [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] No waiting events found dispatching network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.039 189489 WARNING nova.compute.manager [req-3aaae485-3f9b-4c93-82b2-6a1a6a1a0772 req-8c8028ff-0da6-404f-87d1-c8964310e93b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received unexpected event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.044 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.056 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.061 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.062 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.063 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.063 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.064 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.065 189489 DEBUG nova.virt.libvirt.driver [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.081 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.375 189489 INFO nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Took 12.83 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.377 189489 DEBUG nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.386 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.387 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a8fbb028-7553-448d-8ee5-e0b34ade7315 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.388 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 857c831e-16aa-4908-8b4d-bf6fc64b8b23 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.389 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance ea685573-5d12-4d41-8c8d-1d73dc63399d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.390 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.391 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.495 189489 INFO nova.compute.manager [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Took 13.59 seconds to build instance.#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.523 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.529 189489 DEBUG oslo_concurrency.lockutils [None req-82758e2b-574e-4157-a9b7-888efa795edd b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.692s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.542 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
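The "has not changed" check is made against the capacity placement derives from these numbers: per resource class, (total - reserved) * allocation_ratio. With the inventory logged above that yields 32 schedulable VCPUs, 7167 MB of RAM and 70.2 GB of disk. A worked sketch of that arithmetic.

# Placement capacity per resource class:
#   (total - reserved) * allocation_ratio
# using the inventory data logged above.
inventory = {
    'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2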
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.949 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:50:26 compute-0 nova_compute[189485]: 2025-11-29 15:50:26.950 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.256 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.258 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.258 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.259 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.259 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.261 189489 INFO nova.compute.manager [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Terminating instance#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.262 189489 DEBUG nova.compute.manager [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:50:27 compute-0 kernel: tapb14cc28b-87 (unregistering): left promiscuous mode
Nov 29 15:50:27 compute-0 NetworkManager[56360]: <info>  [1764431427.2897] device (tapb14cc28b-87): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:50:27 compute-0 ovn_controller[97827]: 2025-11-29T15:50:27Z|00086|binding|INFO|Releasing lport b14cc28b-87b6-499b-abf4-437c4c5d74e9 from this chassis (sb_readonly=0)
Nov 29 15:50:27 compute-0 ovn_controller[97827]: 2025-11-29T15:50:27Z|00087|binding|INFO|Setting lport b14cc28b-87b6-499b-abf4-437c4c5d74e9 down in Southbound
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.300 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 ovn_controller[97827]: 2025-11-29T15:50:27Z|00088|binding|INFO|Removing iface tapb14cc28b-87 ovn-installed in OVS
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.323 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.324 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:a4:6b:f2 10.100.0.13'], port_security=['fa:16:3e:a4:6b:f2 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': '43c7acb1-c172-4f2d-ad8a-9a0bb198e80b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c94a881a-57d6-46f7-892d-0f7cbde5b879', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'd35f91af89d64c66961a06f6336a059e', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6e4ac110-4ab3-4d40-9195-92dcc114d1de', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.239'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4c1247f5-290f-4d1e-bac9-b6f672583a0a, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=b14cc28b-87b6-499b-abf4-437c4c5d74e9) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.325 106713 INFO neutron.agent.ovn.metadata.agent [-] Port b14cc28b-87b6-499b-abf4-437c4c5d74e9 in datapath c94a881a-57d6-46f7-892d-0f7cbde5b879 unbound from our chassis#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.327 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c94a881a-57d6-46f7-892d-0f7cbde5b879, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.329 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a68cfb48-a95e-48d5-805f-d6194079f062]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.330 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879 namespace which is not needed anymore#033[00m
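
[Note] The agent's conclusion above follows from the Port_Binding update it matched: the row's chassis column went from this chassis to empty, so the port is "unbound from our chassis", and with no VIF ports left in the datapath the ovnmeta- namespace can be removed. A simplified sketch of that check, using plain dicts where the real agent compares ovs.db.idl.Row objects:

    OUR_CHASSIS = 'compute-0.ctlplane.example.com'

    def unbound_from_us(row, old):
        # Simplified: the Port_Binding update matched because the chassis
        # column changed; the port left this node if we were in the old
        # chassis set and are absent from the new one.
        return (OUR_CHASSIS in old.get('chassis', [])
                and OUR_CHASSIS not in row.get('chassis', []))

    row = {'logical_port': 'b14cc28b-87b6-499b-abf4-437c4c5d74e9',
           'up': [False], 'chassis': []}
    old = {'up': [True], 'chassis': [OUR_CHASSIS]}
    print(unbound_from_us(row, old))   # True -> last port gone, tear the namespace down
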
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.332 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Deactivated successfully.
Nov 29 15:50:27 compute-0 systemd[1]: machine-qemu\x2d6\x2dinstance\x2d00000006.scope: Consumed 8.810s CPU time.
Nov 29 15:50:27 compute-0 systemd-machined[155802]: Machine qemu-6-instance-00000006 terminated.
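
[Note] The \x2d runs in the scope name two lines up are systemd unit-name escaping (bytes outside the safe set become \xNN); decoding them yields exactly the machine name systemd-machined just reported. A small decoder, ours for illustration:

    import re

    def systemd_unescape(name):
        # Reverse systemd's \xNN escaping, e.g. \x2d -> '-'.
        return re.sub(r'\\x([0-9a-fA-F]{2})',
                      lambda m: chr(int(m.group(1), 16)), name)

    print(systemd_unescape(r'machine-qemu\x2d6\x2dinstance\x2d00000006.scope'))
    # machine-qemu-6-instance-00000006.scope
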
Nov 29 15:50:27 compute-0 podman[251403]: 2025-11-29 15:50:27.443777303 +0000 UTC m=+0.129355007 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:50:27 compute-0 kernel: tapb14cc28b-87: entered promiscuous mode
Nov 29 15:50:27 compute-0 NetworkManager[56360]: <info>  [1764431427.4900] manager: (tapb14cc28b-87): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Nov 29 15:50:27 compute-0 kernel: tapb14cc28b-87 (unregistering): left promiscuous mode
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.499 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879[250939]: [NOTICE]   (250943) : haproxy version is 2.8.14-c23fe91
Nov 29 15:50:27 compute-0 neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879[250939]: [NOTICE]   (250943) : path to executable is /usr/sbin/haproxy
Nov 29 15:50:27 compute-0 neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879[250939]: [WARNING]  (250943) : Exiting Master process...
Nov 29 15:50:27 compute-0 neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879[250939]: [ALERT]    (250943) : Current worker (250945) exited with code 143 (Terminated)
Nov 29 15:50:27 compute-0 neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879[250939]: [WARNING]  (250943) : All workers exited. Exiting... (0)
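
[Note] Exit code 143 here is not an haproxy failure: by the usual 128+N convention it means the worker died from signal 15 (SIGTERM), the expected result of the metadata agent stopping the container, which is why the master then exits cleanly with (0). Checking with the stdlib:

    import signal

    # "exited with code 143": 128 + N means killed by signal N.
    code = 143
    if code > 128:
        print(signal.Signals(code - 128).name)   # SIGTERM
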
Nov 29 15:50:27 compute-0 systemd[1]: libpod-cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae.scope: Deactivated successfully.
Nov 29 15:50:27 compute-0 podman[251442]: 2025-11-29 15:50:27.521226485 +0000 UTC m=+0.080946326 container died cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.547 189489 INFO nova.virt.libvirt.driver [-] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Instance destroyed successfully.#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.547 189489 DEBUG nova.objects.instance [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lazy-loading 'resources' on Instance uuid 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:27 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae-userdata-shm.mount: Deactivated successfully.
Nov 29 15:50:27 compute-0 systemd[1]: var-lib-containers-storage-overlay-b3e756fde4a24e0700f2a70c8d2ffc495d1f4c0e23942eaed65d64deabfd747e-merged.mount: Deactivated successfully.
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.579 189489 DEBUG nova.virt.libvirt.vif [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:49:59Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-1605699510',display_name='tempest-ServersTestManualDisk-server-1605699510',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-1605699510',id=6,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCfDfrDOPJYWP2EHBy3CBtFXg7Owmc5VEuPgEukF1W4A69Nclda30Sjqrhsp79oOu3o1Xlha7m2bmDQuLhLOWks+GDUR8c0BtZ+CkGB8jqOwUERhFh1Vmwu+vmkFUjvilw==',key_name='tempest-keypair-421912273',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:50:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='d35f91af89d64c66961a06f6336a059e',ramdisk_id='',reservation_id='r-14a985by',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-997126101',owner_user_name='tempest-ServersTestManualDisk-997126101-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:50:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='90e4f977a2394cadad716cb5d7194e56',uuid=43c7acb1-c172-4f2d-ad8a-9a0bb198e80b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.580 189489 DEBUG nova.network.os_vif_util [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Converting VIF {"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.580 189489 DEBUG nova.network.os_vif_util [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:a4:6b:f2,bridge_name='br-int',has_traffic_filtering=True,id=b14cc28b-87b6-499b-abf4-437c4c5d74e9,network=Network(c94a881a-57d6-46f7-892d-0f7cbde5b879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb14cc28b-87') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
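
[Note] nova_to_osvif_vif maps the legacy VIF dict (logged just above) onto a typed os-vif object; note how port_filter becomes has_traffic_filtering and devname becomes vif_name in the Converted object line. A dataclass stand-in covering only the fields shown here (the real VIFOpenVSwitch is an oslo.versionedobjects class in os-vif, not this):

    from dataclasses import dataclass

    @dataclass
    class VIFOpenVSwitch:   # stand-in mirroring the fields logged above
        id: str
        address: str
        bridge_name: str
        vif_name: str
        active: bool
        has_traffic_filtering: bool
        preserve_on_delete: bool

    def nova_to_osvif_vif(vif):
        details = vif.get('details', {})
        return VIFOpenVSwitch(
            id=vif['id'],
            address=vif['address'],
            bridge_name=details.get('bridge_name', 'br-int'),
            vif_name=vif['devname'],
            active=vif.get('active', False),
            has_traffic_filtering=details.get('port_filter', False),
            preserve_on_delete=vif.get('preserve_on_delete', False),
        )

    legacy = {'id': 'b14cc28b-87b6-499b-abf4-437c4c5d74e9',
              'address': 'fa:16:3e:a4:6b:f2',
              'devname': 'tapb14cc28b-87',
              'active': False, 'preserve_on_delete': False,
              'details': {'port_filter': True, 'bridge_name': 'br-int'}}
    print(nova_to_osvif_vif(legacy))
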
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.581 189489 DEBUG os_vif [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:6b:f2,bridge_name='br-int',has_traffic_filtering=True,id=b14cc28b-87b6-499b-abf4-437c4c5d74e9,network=Network(c94a881a-57d6-46f7-892d-0f7cbde5b879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb14cc28b-87') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.586 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.587 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb14cc28b-87, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.593 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.595 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:50:27 compute-0 podman[251442]: 2025-11-29 15:50:27.596096537 +0000 UTC m=+0.155816378 container cleanup cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.598 189489 INFO os_vif [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:a4:6b:f2,bridge_name='br-int',has_traffic_filtering=True,id=b14cc28b-87b6-499b-abf4-437c4c5d74e9,network=Network(c94a881a-57d6-46f7-892d-0f7cbde5b879),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapb14cc28b-87')#033[00m
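
[Note] The DelPortCommand transaction above is the OVSDB-IDL equivalent of an idempotent ovs-vsctl port removal. A hedged subprocess sketch, assuming ovs-vsctl is on PATH and the caller is allowed to talk to ovsdb-server:

    import subprocess

    def del_port(bridge, port):
        # Same effect as DelPortCommand(port=..., bridge=..., if_exists=True):
        # --if-exists makes the removal a no-op if the port is already gone.
        subprocess.run(['ovs-vsctl', '--if-exists', 'del-port', bridge, port],
                       check=True)

    # del_port('br-int', 'tapb14cc28b-87')
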
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.599 189489 INFO nova.virt.libvirt.driver [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Deleting instance files /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b_del#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.600 189489 INFO nova.virt.libvirt.driver [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Deletion of /var/lib/nova/instances/43c7acb1-c172-4f2d-ad8a-9a0bb198e80b_del complete#033[00m
Nov 29 15:50:27 compute-0 systemd[1]: libpod-conmon-cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae.scope: Deactivated successfully.
Nov 29 15:50:27 compute-0 podman[251485]: 2025-11-29 15:50:27.674508655 +0000 UTC m=+0.052652067 container remove cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.682 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[ff69b860-d11b-4c01-a671-2ad37b9bc5ce]: (4, ('Sat Nov 29 03:50:27 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879 (cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae)\ncc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae\nSat Nov 29 03:50:27 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879 (cc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae)\ncc6239c960a7d3e875f2f4aa21eeac4eb59ff12e56d5c8ffa96591afec27c2ae\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.684 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[da5123c6-8cbc-4b15-9dbb-a6d8d033dbd0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.686 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc94a881a-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.688 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 kernel: tapc94a881a-50: left promiscuous mode
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.693 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.699 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[c14ebce6-fc6c-4a4d-ac77-20ba1cdcc32d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.705 189489 INFO nova.compute.manager [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Took 0.44 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.706 189489 DEBUG oslo.service.loopingcall [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.706 189489 DEBUG nova.compute.manager [-] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.707 189489 DEBUG nova.network.neutron [-] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:50:27 compute-0 nova_compute[189485]: 2025-11-29 15:50:27.717 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.726 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[20c62a3c-9b43-46a7-adf6-33829b16cc9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.728 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[d6f4fdcb-049d-4399-aad1-01b71f391e64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.748 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1bcdbe51-83a3-4139-9f18-2ea62059eb26]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516387, 'reachable_time': 30648, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251496, 'error': None, 'target': 'ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:27 compute-0 systemd[1]: run-netns-ovnmeta\x2dc94a881a\x2d57d6\x2d46f7\x2d892d\x2d0f7cbde5b879.mount: Deactivated successfully.
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.756 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c94a881a-57d6-46f7-892d-0f7cbde5b879 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 15:50:27 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:27.756 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[f9c316a1-eca4-4958-80d8-f1d8bbae0358]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:28 compute-0 nova_compute[189485]: 2025-11-29 15:50:28.951 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:50:28 compute-0 nova_compute[189485]: 2025-11-29 15:50:28.952 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
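
[Note] _reclaim_queued_deletes only has work to do when instances are soft-deleted; with reclaim_instance_interval at its default of 0 the task short-circuits exactly as logged. A toy version of that gate (illustrative names, not nova's code):

    RECLAIM_INSTANCE_INTERVAL = 0   # nova defaults this to 0 (no soft delete)

    def reclaim_queued_deletes():
        # With the interval <= 0 nothing is ever queued as SOFT_DELETED,
        # so the periodic task returns immediately.
        if RECLAIM_INSTANCE_INTERVAL <= 0:
            print('CONF.reclaim_instance_interval <= 0, skipping...')
            return
        # else: find SOFT_DELETED instances older than the interval and
        # really delete them.

    reclaim_queued_deletes()   # oslo.service invokes this on a timer
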
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.041 189489 DEBUG nova.compute.manager [req-00894925-fb40-4c38-a747-d59d80e2911d req-8f92798b-c540-453e-8c5c-c3ed128d0fc6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received event network-vif-unplugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.042 189489 DEBUG oslo_concurrency.lockutils [req-00894925-fb40-4c38-a747-d59d80e2911d req-8f92798b-c540-453e-8c5c-c3ed128d0fc6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.042 189489 DEBUG oslo_concurrency.lockutils [req-00894925-fb40-4c38-a747-d59d80e2911d req-8f92798b-c540-453e-8c5c-c3ed128d0fc6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.043 189489 DEBUG oslo_concurrency.lockutils [req-00894925-fb40-4c38-a747-d59d80e2911d req-8f92798b-c540-453e-8c5c-c3ed128d0fc6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.043 189489 DEBUG nova.compute.manager [req-00894925-fb40-4c38-a747-d59d80e2911d req-8f92798b-c540-453e-8c5c-c3ed128d0fc6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] No waiting events found dispatching network-vif-unplugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.044 189489 DEBUG nova.compute.manager [req-00894925-fb40-4c38-a747-d59d80e2911d req-8f92798b-c540-453e-8c5c-c3ed128d0fc6 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received event network-vif-unplugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
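
[Note] Nova pairs Neutron's external events with waiters registered per instance; here the delete path is already past the point of waiting for network-vif-unplugged, so the pop finds no waiter, which is all "No waiting events found" means. A stdlib sketch of such an event registry (a simplification of nova's InstanceEvents, not its code):

    import threading
    from collections import defaultdict

    _events = defaultdict(dict)   # instance uuid -> {event name: Event}
    _mutex = threading.Lock()

    def prepare(instance, name):
        # A caller that intends to wait registers the event up front.
        with _mutex:
            return _events[instance].setdefault(name, threading.Event())

    def pop_event(instance, name):
        # The external notification pops the waiter and fires it, if any.
        with _mutex:
            ev = _events[instance].pop(name, None)
        if ev is None:   # nobody was waiting -> same situation as the log
            print(f'No waiting events found dispatching {name}')
        else:
            ev.set()

    pop_event('43c7acb1-c172-4f2d-ad8a-9a0bb198e80b',
              'network-vif-unplugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9')
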
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.485 189489 DEBUG nova.compute.manager [req-89b4cb33-1e63-47c7-bd6f-2e5db120c83d req-b199babe-43ca-4ed8-b47a-04b318c1e909 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received event network-changed-edefdb98-b93f-44d4-b001-9327ca3fbfd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.485 189489 DEBUG nova.compute.manager [req-89b4cb33-1e63-47c7-bd6f-2e5db120c83d req-b199babe-43ca-4ed8-b47a-04b318c1e909 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Refreshing instance network info cache due to event network-changed-edefdb98-b93f-44d4-b001-9327ca3fbfd5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.486 189489 DEBUG oslo_concurrency.lockutils [req-89b4cb33-1e63-47c7-bd6f-2e5db120c83d req-b199babe-43ca-4ed8-b47a-04b318c1e909 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.486 189489 DEBUG oslo_concurrency.lockutils [req-89b4cb33-1e63-47c7-bd6f-2e5db120c83d req-b199babe-43ca-4ed8-b47a-04b318c1e909 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.487 189489 DEBUG nova.network.neutron [req-89b4cb33-1e63-47c7-bd6f-2e5db120c83d req-b199babe-43ca-4ed8-b47a-04b318c1e909 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Refreshing network info cache for port edefdb98-b93f-44d4-b001-9327ca3fbfd5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:50:29 compute-0 podman[251499]: 2025-11-29 15:50:29.679342172 +0000 UTC m=+0.111161088 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 15:50:29 compute-0 podman[251500]: 2025-11-29 15:50:29.684921162 +0000 UTC m=+0.122141133 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Nov 29 15:50:29 compute-0 podman[251498]: 2025-11-29 15:50:29.702055183 +0000 UTC m=+0.125415742 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., version=9.4, container_name=kepler, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Nov 29 15:50:29 compute-0 podman[251502]: 2025-11-29 15:50:29.707950351 +0000 UTC m=+0.124667592 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, release=1755695350)
Nov 29 15:50:29 compute-0 podman[251501]: 2025-11-29 15:50:29.712573766 +0000 UTC m=+0.147949058 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
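
[Note] The health_status=healthy events above are podman's own periodic healthcheck runs for each EDPM container (ceilometer_agent_compute, ovn_metadata_agent, kepler, openstack_network_exporter, ovn_controller, ...). The same state can be read back on the host; a small sketch, assuming podman is installed and the caller may inspect the container:

    import json
    import subprocess

    def health_status(container):
        # podman inspect returns a JSON array; containers with a configured
        # healthcheck expose State.Health.Status ('healthy'/'unhealthy').
        out = subprocess.run(['podman', 'inspect', container],
                             check=True, capture_output=True, text=True).stdout
        return json.loads(out)[0]['State']['Health']['Status']

    # health_status('ovn_metadata_agent')   # -> 'healthy'
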
Nov 29 15:50:29 compute-0 podman[203677]: time="2025-11-29T15:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:50:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 31990 "" "Go-http-client/1.1"
Nov 29 15:50:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5707 "" "Go-http-client/1.1"
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.984 189489 DEBUG nova.network.neutron [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Updated VIF entry in instance network info cache for port b14cc28b-87b6-499b-abf4-437c4c5d74e9. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:50:29 compute-0 nova_compute[189485]: 2025-11-29 15:50:29.985 189489 DEBUG nova.network.neutron [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Updating instance_info_cache with network_info: [{"id": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "address": "fa:16:3e:a4:6b:f2", "network": {"id": "c94a881a-57d6-46f7-892d-0f7cbde5b879", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-738321165-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.239", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "d35f91af89d64c66961a06f6336a059e", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb14cc28b-87", "ovs_interfaceid": "b14cc28b-87b6-499b-abf4-437c4c5d74e9", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.010 189489 DEBUG oslo_concurrency.lockutils [req-ed8af428-c054-4447-8ca7-6f75f446443b req-d9eaaff4-358e-493e-8af4-daa4cade07a3 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.330 189489 DEBUG nova.network.neutron [-] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.354 189489 INFO nova.compute.manager [-] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Took 2.65 seconds to deallocate network for instance.#033[00m
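
[Note] The 2.65 s figure is simply the wall-clock gap between the "Deallocating network for instance" line at 15:50:27.706 and this summary line at 15:50:30.354. Checking the arithmetic:

    from datetime import datetime

    start = datetime.fromisoformat('2025-11-29 15:50:27.706')  # deallocate begins
    end = datetime.fromisoformat('2025-11-29 15:50:30.354')    # summary line
    print((end - start).total_seconds())   # 2.648 -> logged as "2.65 seconds"
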
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.404 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.404 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.529 189489 DEBUG nova.compute.provider_tree [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.545 189489 DEBUG nova.scheduler.client.report [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.567 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.162s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.588 189489 INFO nova.scheduler.client.report [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Deleted allocations for instance 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.598 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.680 189489 DEBUG oslo_concurrency.lockutils [None req-29e41e39-7c53-4de7-b24b-2af784630ad0 90e4f977a2394cadad716cb5d7194e56 d35f91af89d64c66961a06f6336a059e - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.423s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.776 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.776 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.777 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.777 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.777 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.778 189489 INFO nova.compute.manager [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Terminating instance#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.779 189489 DEBUG nova.compute.manager [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:50:30 compute-0 kernel: tapedefdb98-b9 (unregistering): left promiscuous mode
Nov 29 15:50:30 compute-0 NetworkManager[56360]: <info>  [1764431430.8098] device (tapedefdb98-b9): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:50:30 compute-0 ovn_controller[97827]: 2025-11-29T15:50:30Z|00089|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:50:30 compute-0 ovn_controller[97827]: 2025-11-29T15:50:30Z|00090|binding|INFO|Releasing lport ec3a721a-108a-4ae8-a5bc-85ed17fb9b58 from this chassis (sb_readonly=0)
Nov 29 15:50:30 compute-0 ovn_controller[97827]: 2025-11-29T15:50:30Z|00091|binding|INFO|Releasing lport 6fd5af9f-807d-4404-8d7e-106bc3b2230a from this chassis (sb_readonly=0)
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.836 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:30 compute-0 ovn_controller[97827]: 2025-11-29T15:50:30Z|00092|binding|INFO|Releasing lport edefdb98-b93f-44d4-b001-9327ca3fbfd5 from this chassis (sb_readonly=0)
Nov 29 15:50:30 compute-0 ovn_controller[97827]: 2025-11-29T15:50:30Z|00093|binding|INFO|Setting lport edefdb98-b93f-44d4-b001-9327ca3fbfd5 down in Southbound
Nov 29 15:50:30 compute-0 ovn_controller[97827]: 2025-11-29T15:50:30Z|00094|binding|INFO|Removing iface tapedefdb98-b9 ovn-installed in OVS
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.844 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:30.867 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:dc:b3:bc 10.100.0.10'], port_security=['fa:16:3e:dc:b3:bc 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': '857c831e-16aa-4908-8b4d-bf6fc64b8b23', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-da0a31ff-8236-4651-927c-b129d61fb520', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '8a2c00b2ea684b44ae64ef5a0dedb9db', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'c1a8d723-a8a5-4310-a62a-e1ff09806eca', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.223'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=12a234e3-54be-49c8-9254-7f5360cba0d3, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=edefdb98-b93f-44d4-b001-9327ca3fbfd5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:50:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:30.868 106713 INFO neutron.agent.ovn.metadata.agent [-] Port edefdb98-b93f-44d4-b001-9327ca3fbfd5 in datapath da0a31ff-8236-4651-927c-b129d61fb520 unbound from our chassis#033[00m
Nov 29 15:50:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:30.870 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network da0a31ff-8236-4651-927c-b129d61fb520, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:50:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:30.871 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[c8a603df-acab-4585-96fd-e2b769e14877]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:30.875 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520 namespace which is not needed anymore#033[00m
Nov 29 15:50:30 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Nov 29 15:50:30 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 12.539s CPU time.
Nov 29 15:50:30 compute-0 systemd-machined[155802]: Machine qemu-8-instance-00000008 terminated.
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.902 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:30 compute-0 nova_compute[189485]: 2025-11-29 15:50:30.910 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.003 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.011 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.042 189489 INFO nova.virt.libvirt.driver [-] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Instance destroyed successfully.#033[00m
Nov 29 15:50:31 compute-0 neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520[251215]: [NOTICE]   (251219) : haproxy version is 2.8.14-c23fe91
Nov 29 15:50:31 compute-0 neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520[251215]: [NOTICE]   (251219) : path to executable is /usr/sbin/haproxy
Nov 29 15:50:31 compute-0 neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520[251215]: [WARNING]  (251219) : Exiting Master process...
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.044 189489 DEBUG nova.objects.instance [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lazy-loading 'resources' on Instance uuid 857c831e-16aa-4908-8b4d-bf6fc64b8b23 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:50:31 compute-0 neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520[251215]: [ALERT]    (251219) : Current worker (251221) exited with code 143 (Terminated)
Nov 29 15:50:31 compute-0 neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520[251215]: [WARNING]  (251219) : All workers exited. Exiting... (0)
Nov 29 15:50:31 compute-0 systemd[1]: libpod-a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10.scope: Deactivated successfully.
Nov 29 15:50:31 compute-0 podman[251615]: 2025-11-29 15:50:31.056973712 +0000 UTC m=+0.070888287 container died a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125)
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.067 189489 DEBUG nova.virt.libvirt.vif [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:50:06Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-478947030',display_name='tempest-ServersTestJSON-server-478947030',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-478947030',id=8,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIbASp+Y2GFYtyctN4zFsXV4Yw34qHyoIxNYEUuBYoa1l4ucr5Hl8EX+a6am74YbwCLD1ae1Nlemi69FMS+F+Ji9q4w40jNt4jsb1ZVxWPnDlWf2tpRKugHBkvU+XKLSrg==',key_name='tempest-keypair-1803496096',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:50:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='8a2c00b2ea684b44ae64ef5a0dedb9db',ramdisk_id='',reservation_id='r-uegnxfgu',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-1871335564',owner_user_name='tempest-ServersTestJSON-1871335564-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:50:19Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='5ff5a7c4561f4a87aada601e5a4f9332',uuid=857c831e-16aa-4908-8b4d-bf6fc64b8b23,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.067 189489 DEBUG nova.network.os_vif_util [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Converting VIF {"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.068 189489 DEBUG nova.network.os_vif_util [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:dc:b3:bc,bridge_name='br-int',has_traffic_filtering=True,id=edefdb98-b93f-44d4-b001-9327ca3fbfd5,network=Network(da0a31ff-8236-4651-927c-b129d61fb520),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedefdb98-b9') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.070 189489 DEBUG os_vif [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:b3:bc,bridge_name='br-int',has_traffic_filtering=True,id=edefdb98-b93f-44d4-b001-9327ca3fbfd5,network=Network(da0a31ff-8236-4651-927c-b129d61fb520),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedefdb98-b9') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.071 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.072 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapedefdb98-b9, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.078 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.080 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.083 189489 INFO os_vif [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:dc:b3:bc,bridge_name='br-int',has_traffic_filtering=True,id=edefdb98-b93f-44d4-b001-9327ca3fbfd5,network=Network(da0a31ff-8236-4651-927c-b129d61fb520),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapedefdb98-b9')#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.084 189489 INFO nova.virt.libvirt.driver [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Deleting instance files /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23_del#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.085 189489 INFO nova.virt.libvirt.driver [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Deletion of /var/lib/nova/instances/857c831e-16aa-4908-8b4d-bf6fc64b8b23_del complete#033[00m
Nov 29 15:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10-userdata-shm.mount: Deactivated successfully.
Nov 29 15:50:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-31b6646003fb1a564cf1bc9640e0b5234fdf10282007a7458028ce7514388f44-merged.mount: Deactivated successfully.
Nov 29 15:50:31 compute-0 podman[251615]: 2025-11-29 15:50:31.108272491 +0000 UTC m=+0.122187076 container cleanup a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 15:50:31 compute-0 systemd[1]: libpod-conmon-a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10.scope: Deactivated successfully.
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.145 189489 INFO nova.compute.manager [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Took 0.36 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.145 189489 DEBUG oslo.service.loopingcall [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.146 189489 DEBUG nova.compute.manager [-] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.146 189489 DEBUG nova.network.neutron [-] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:50:31 compute-0 podman[251660]: 2025-11-29 15:50:31.194621991 +0000 UTC m=+0.055641277 container remove a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.208 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[0d290e87-d850-4994-aa83-0cf63554c46a]: (4, ('Sat Nov 29 03:50:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520 (a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10)\na9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10\nSat Nov 29 03:50:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520 (a9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10)\na9d7144d4a551cfb4ad3dbcd8709dfe250d7d11ccc832b8e88867dbf93ef7b10\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.212 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[9e956e41-37f6-44ed-9788-2a4ebcc78b0b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.213 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapda0a31ff-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.215 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:31 compute-0 kernel: tapda0a31ff-80: left promiscuous mode
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.227 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.230 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.233 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[937c732b-2b19-45d3-8b5d-a4ce71f27046]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.258 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6620c93c-0a62-4463-947a-fb875ce4d0b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.259 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[d5ab3553-94c0-4a43-a576-a800cc952841]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.276 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[4d01c2ff-a490-4adb-86d2-4bac9af3a06a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517131, 'reachable_time': 38385, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251674, 'error': None, 'target': 'ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:31 compute-0 systemd[1]: run-netns-ovnmeta\x2dda0a31ff\x2d8236\x2d4651\x2d927c\x2db129d61fb520.mount: Deactivated successfully.
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.279 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-da0a31ff-8236-4651-927c-b129d61fb520 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 15:50:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:31.279 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[53411593-db34-4603-93df-8e9f101d5b3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:50:31 compute-0 openstack_network_exporter[205841]: ERROR   15:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:50:31 compute-0 openstack_network_exporter[205841]: ERROR   15:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:50:31 compute-0 openstack_network_exporter[205841]: ERROR   15:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:50:31 compute-0 openstack_network_exporter[205841]: ERROR   15:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:50:31 compute-0 openstack_network_exporter[205841]: ERROR   15:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.576 189489 DEBUG nova.compute.manager [req-ec932081-331e-40cc-946c-0d64c38459a0 req-dc4f510b-3c49-47dd-800e-597bb76ea4ff 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received event network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.577 189489 DEBUG oslo_concurrency.lockutils [req-ec932081-331e-40cc-946c-0d64c38459a0 req-dc4f510b-3c49-47dd-800e-597bb76ea4ff 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.577 189489 DEBUG oslo_concurrency.lockutils [req-ec932081-331e-40cc-946c-0d64c38459a0 req-dc4f510b-3c49-47dd-800e-597bb76ea4ff 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.577 189489 DEBUG oslo_concurrency.lockutils [req-ec932081-331e-40cc-946c-0d64c38459a0 req-dc4f510b-3c49-47dd-800e-597bb76ea4ff 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "43c7acb1-c172-4f2d-ad8a-9a0bb198e80b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.577 189489 DEBUG nova.compute.manager [req-ec932081-331e-40cc-946c-0d64c38459a0 req-dc4f510b-3c49-47dd-800e-597bb76ea4ff 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] No waiting events found dispatching network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.578 189489 WARNING nova.compute.manager [req-ec932081-331e-40cc-946c-0d64c38459a0 req-dc4f510b-3c49-47dd-800e-597bb76ea4ff 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received unexpected event network-vif-plugged-b14cc28b-87b6-499b-abf4-437c4c5d74e9 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.966 189489 DEBUG nova.compute.manager [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-changed-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.967 189489 DEBUG nova.compute.manager [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Refreshing instance network info cache due to event network-changed-6a066856-f7c0-4504-8a23-f8d966710ea5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.968 189489 DEBUG oslo_concurrency.lockutils [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.969 189489 DEBUG oslo_concurrency.lockutils [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:31 compute-0 nova_compute[189485]: 2025-11-29 15:50:31.970 189489 DEBUG nova.network.neutron [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Refreshing network info cache for port 6a066856-f7c0-4504-8a23-f8d966710ea5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:50:32 compute-0 nova_compute[189485]: 2025-11-29 15:50:32.121 189489 DEBUG nova.network.neutron [req-89b4cb33-1e63-47c7-bd6f-2e5db120c83d req-b199babe-43ca-4ed8-b47a-04b318c1e909 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Updated VIF entry in instance network info cache for port edefdb98-b93f-44d4-b001-9327ca3fbfd5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:50:32 compute-0 nova_compute[189485]: 2025-11-29 15:50:32.123 189489 DEBUG nova.network.neutron [req-89b4cb33-1e63-47c7-bd6f-2e5db120c83d req-b199babe-43ca-4ed8-b47a-04b318c1e909 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Updating instance_info_cache with network_info: [{"id": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "address": "fa:16:3e:dc:b3:bc", "network": {"id": "da0a31ff-8236-4651-927c-b129d61fb520", "bridge": "br-int", "label": "tempest-ServersTestJSON-890978964-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.223", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "8a2c00b2ea684b44ae64ef5a0dedb9db", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapedefdb98-b9", "ovs_interfaceid": "edefdb98-b93f-44d4-b001-9327ca3fbfd5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:32 compute-0 nova_compute[189485]: 2025-11-29 15:50:32.145 189489 DEBUG oslo_concurrency.lockutils [req-89b4cb33-1e63-47c7-bd6f-2e5db120c83d req-b199babe-43ca-4ed8-b47a-04b318c1e909 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-857c831e-16aa-4908-8b4d-bf6fc64b8b23" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.346 189489 DEBUG nova.network.neutron [-] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.372 189489 INFO nova.compute.manager [-] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Took 2.23 seconds to deallocate network for instance.#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.426 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.427 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.537 189489 DEBUG nova.compute.provider_tree [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.563 189489 DEBUG nova.scheduler.client.report [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.595 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.168s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.659 189489 INFO nova.scheduler.client.report [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Deleted allocations for instance 857c831e-16aa-4908-8b4d-bf6fc64b8b23#033[00m
Nov 29 15:50:33 compute-0 podman[251675]: 2025-11-29 15:50:33.701575206 +0000 UTC m=+0.138477494 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.715 189489 DEBUG nova.compute.manager [req-95ec5ff8-5384-48b0-a000-b4842109193b req-b027222a-5d90-421d-be23-616df4834aad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-changed-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.717 189489 DEBUG nova.compute.manager [req-95ec5ff8-5384-48b0-a000-b4842109193b req-b027222a-5d90-421d-be23-616df4834aad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Refreshing instance network info cache due to event network-changed-471b576d-abd9-4813-915c-33fdffb4ae94. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.718 189489 DEBUG oslo_concurrency.lockutils [req-95ec5ff8-5384-48b0-a000-b4842109193b req-b027222a-5d90-421d-be23-616df4834aad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.718 189489 DEBUG oslo_concurrency.lockutils [req-95ec5ff8-5384-48b0-a000-b4842109193b req-b027222a-5d90-421d-be23-616df4834aad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.719 189489 DEBUG nova.network.neutron [req-95ec5ff8-5384-48b0-a000-b4842109193b req-b027222a-5d90-421d-be23-616df4834aad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Refreshing network info cache for port 471b576d-abd9-4813-915c-33fdffb4ae94 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:50:33 compute-0 nova_compute[189485]: 2025-11-29 15:50:33.860 189489 DEBUG oslo_concurrency.lockutils [None req-860ed805-abc3-45ce-b3f1-5b5350bb4f1e 5ff5a7c4561f4a87aada601e5a4f9332 8a2c00b2ea684b44ae64ef5a0dedb9db - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.083s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.085 189489 DEBUG nova.compute.manager [req-aed69b50-5175-407c-a31f-e2eb11e56cb8 req-fcbc52f5-2140-4cae-84a6-911e8b3c8466 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received event network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.086 189489 DEBUG oslo_concurrency.lockutils [req-aed69b50-5175-407c-a31f-e2eb11e56cb8 req-fcbc52f5-2140-4cae-84a6-911e8b3c8466 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.087 189489 DEBUG oslo_concurrency.lockutils [req-aed69b50-5175-407c-a31f-e2eb11e56cb8 req-fcbc52f5-2140-4cae-84a6-911e8b3c8466 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.088 189489 DEBUG oslo_concurrency.lockutils [req-aed69b50-5175-407c-a31f-e2eb11e56cb8 req-fcbc52f5-2140-4cae-84a6-911e8b3c8466 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.088 189489 DEBUG nova.compute.manager [req-aed69b50-5175-407c-a31f-e2eb11e56cb8 req-fcbc52f5-2140-4cae-84a6-911e8b3c8466 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] No waiting events found dispatching network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.089 189489 WARNING nova.compute.manager [req-aed69b50-5175-407c-a31f-e2eb11e56cb8 req-fcbc52f5-2140-4cae-84a6-911e8b3c8466 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received unexpected event network-vif-plugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.089 189489 DEBUG nova.compute.manager [req-aed69b50-5175-407c-a31f-e2eb11e56cb8 req-fcbc52f5-2140-4cae-84a6-911e8b3c8466 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received event network-vif-deleted-edefdb98-b93f-44d4-b001-9327ca3fbfd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.645 189489 DEBUG nova.network.neutron [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updated VIF entry in instance network info cache for port 6a066856-f7c0-4504-8a23-f8d966710ea5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.646 189489 DEBUG nova.network.neutron [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.667 189489 DEBUG oslo_concurrency.lockutils [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.668 189489 DEBUG nova.compute.manager [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Received event network-vif-deleted-b14cc28b-87b6-499b-abf4-437c4c5d74e9 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.669 189489 DEBUG nova.compute.manager [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received event network-vif-unplugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.669 189489 DEBUG oslo_concurrency.lockutils [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.670 189489 DEBUG oslo_concurrency.lockutils [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.671 189489 DEBUG oslo_concurrency.lockutils [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "857c831e-16aa-4908-8b4d-bf6fc64b8b23-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.671 189489 DEBUG nova.compute.manager [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] No waiting events found dispatching network-vif-unplugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:50:34 compute-0 nova_compute[189485]: 2025-11-29 15:50:34.672 189489 DEBUG nova.compute.manager [req-46e981bd-5bca-40ff-b78a-40de27f44cd7 req-afcd3eee-85ee-45d7-88e1-b40f510a19bb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Received event network-vif-unplugged-edefdb98-b93f-44d4-b001-9327ca3fbfd5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:50:35 compute-0 nova_compute[189485]: 2025-11-29 15:50:35.601 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:36 compute-0 nova_compute[189485]: 2025-11-29 15:50:36.073 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:36 compute-0 nova_compute[189485]: 2025-11-29 15:50:36.345 189489 DEBUG nova.network.neutron [req-95ec5ff8-5384-48b0-a000-b4842109193b req-b027222a-5d90-421d-be23-616df4834aad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updated VIF entry in instance network info cache for port 471b576d-abd9-4813-915c-33fdffb4ae94. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:50:36 compute-0 nova_compute[189485]: 2025-11-29 15:50:36.346 189489 DEBUG nova.network.neutron [req-95ec5ff8-5384-48b0-a000-b4842109193b req-b027222a-5d90-421d-be23-616df4834aad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updating instance_info_cache with network_info: [{"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:50:36 compute-0 nova_compute[189485]: 2025-11-29 15:50:36.376 189489 DEBUG oslo_concurrency.lockutils [req-95ec5ff8-5384-48b0-a000-b4842109193b req-b027222a-5d90-421d-be23-616df4834aad 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:50:36 compute-0 podman[251693]: 2025-11-29 15:50:36.669975692 +0000 UTC m=+0.113861382 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:50:37 compute-0 ovn_controller[97827]: 2025-11-29T15:50:37Z|00095|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:50:37 compute-0 ovn_controller[97827]: 2025-11-29T15:50:37Z|00096|binding|INFO|Releasing lport ec3a721a-108a-4ae8-a5bc-85ed17fb9b58 from this chassis (sb_readonly=0)
Nov 29 15:50:37 compute-0 nova_compute[189485]: 2025-11-29 15:50:37.657 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:39 compute-0 ovn_controller[97827]: 2025-11-29T15:50:39Z|00097|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:50:39 compute-0 ovn_controller[97827]: 2025-11-29T15:50:39Z|00098|binding|INFO|Releasing lport ec3a721a-108a-4ae8-a5bc-85ed17fb9b58 from this chassis (sb_readonly=0)
Nov 29 15:50:39 compute-0 nova_compute[189485]: 2025-11-29 15:50:39.811 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:40 compute-0 ovn_controller[97827]: 2025-11-29T15:50:40Z|00099|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:50:40 compute-0 ovn_controller[97827]: 2025-11-29T15:50:40Z|00100|binding|INFO|Releasing lport ec3a721a-108a-4ae8-a5bc-85ed17fb9b58 from this chassis (sb_readonly=0)
Nov 29 15:50:40 compute-0 nova_compute[189485]: 2025-11-29 15:50:40.177 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:40 compute-0 nova_compute[189485]: 2025-11-29 15:50:40.605 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:41 compute-0 nova_compute[189485]: 2025-11-29 15:50:41.077 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:42 compute-0 nova_compute[189485]: 2025-11-29 15:50:42.541 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431427.539376, 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:50:42 compute-0 nova_compute[189485]: 2025-11-29 15:50:42.543 189489 INFO nova.compute.manager [-] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] VM Stopped (Lifecycle Event)
Nov 29 15:50:43 compute-0 nova_compute[189485]: 2025-11-29 15:50:43.264 189489 DEBUG nova.compute.manager [None req-a9460c0f-c50e-4c3f-aec6-eac514a1d2b0 - - - - - -] [instance: 43c7acb1-c172-4f2d-ad8a-9a0bb198e80b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:50:45 compute-0 nova_compute[189485]: 2025-11-29 15:50:45.609 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:46 compute-0 nova_compute[189485]: 2025-11-29 15:50:46.036 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431431.0348878, 857c831e-16aa-4908-8b4d-bf6fc64b8b23 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:50:46 compute-0 nova_compute[189485]: 2025-11-29 15:50:46.037 189489 INFO nova.compute.manager [-] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] VM Stopped (Lifecycle Event)
Nov 29 15:50:46 compute-0 nova_compute[189485]: 2025-11-29 15:50:46.080 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:46 compute-0 nova_compute[189485]: 2025-11-29 15:50:46.170 189489 DEBUG nova.compute.manager [None req-7ae853d1-ef8d-4f40-a9c5-ef57b1e85fca - - - - - -] [instance: 857c831e-16aa-4908-8b4d-bf6fc64b8b23] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
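[editor's note] The two "VM Stopped (Lifecycle Event)" entries are nova's libvirt driver relaying domain-lifecycle callbacks for the instances being torn down; on each one the compute manager re-checks the power state. A self-contained sketch of the underlying libvirt-python registration (not nova's code, just the event mechanism it builds on):

    import libvirt

    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.openReadOnly('qemu:///system')

    def lifecycle_cb(conn, dom, event, detail, opaque):
        # VIR_DOMAIN_EVENT_STOPPED is what nova re-emits as the
        # "VM Stopped (Lifecycle Event)" INFO line above
        if event == libvirt.VIR_DOMAIN_EVENT_STOPPED:
            print(dom.UUIDString(), 'stopped')

    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE, lifecycle_cb, None)
    while True:  # drive the event loop so callbacks fire
        libvirt.virEventRunDefaultImpl()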
Nov 29 15:50:49 compute-0 podman[251717]: 2025-11-29 15:50:49.663484681 +0000 UTC m=+0.107190682 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:50:50 compute-0 nova_compute[189485]: 2025-11-29 15:50:50.612 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:50 compute-0 ovn_controller[97827]: 2025-11-29T15:50:50Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:27:bf:aa 10.100.0.9
Nov 29 15:50:50 compute-0 ovn_controller[97827]: 2025-11-29T15:50:50Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:27:bf:aa 10.100.0.9
Nov 29 15:50:51 compute-0 nova_compute[189485]: 2025-11-29 15:50:51.082 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:55 compute-0 nova_compute[189485]: 2025-11-29 15:50:55.614 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:56 compute-0 nova_compute[189485]: 2025-11-29 15:50:56.085 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:50:57 compute-0 podman[251760]: 2025-11-29 15:50:57.713488415 +0000 UTC m=+0.150774063 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 15:50:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:59.208 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:50:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:59.209 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:50:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:50:59.210 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:50:59 compute-0 podman[203677]: time="2025-11-29T15:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:50:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:50:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5255 "" "Go-http-client/1.1"
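[editor's note] The two GET lines are podman_exporter polling the libpod REST API over the root podman socket (the CONTAINER_HOST=unix:///run/podman/podman.sock setting is visible in its config_data above). A rough equivalent of the first query in plain Python over the UNIX socket, with no podman client library assumed:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true&external=false')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # 200 and a body size like the logged 30757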
Nov 29 15:51:00 compute-0 ovn_controller[97827]: 2025-11-29T15:51:00Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:b8:50:d3 10.100.0.11
Nov 29 15:51:00 compute-0 ovn_controller[97827]: 2025-11-29T15:51:00Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:50:d3 10.100.0.11
Nov 29 15:51:00 compute-0 nova_compute[189485]: 2025-11-29 15:51:00.616 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:00 compute-0 podman[251788]: 2025-11-29 15:51:00.674200096 +0000 UTC m=+0.114323265 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, release-0.7.12=, version=9.4)
Nov 29 15:51:00 compute-0 podman[251790]: 2025-11-29 15:51:00.676461196 +0000 UTC m=+0.117343455 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:51:00 compute-0 podman[251789]: 2025-11-29 15:51:00.693005941 +0000 UTC m=+0.129254035 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:51:00 compute-0 podman[251792]: 2025-11-29 15:51:00.71789533 +0000 UTC m=+0.142161902 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, config_id=edpm, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 29 15:51:00 compute-0 podman[251791]: 2025-11-29 15:51:00.721972749 +0000 UTC m=+0.158465240 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:51:01 compute-0 nova_compute[189485]: 2025-11-29 15:51:01.090 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:01 compute-0 openstack_network_exporter[205841]: ERROR   15:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:51:01 compute-0 openstack_network_exporter[205841]: ERROR   15:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:51:01 compute-0 openstack_network_exporter[205841]: ERROR   15:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:51:01 compute-0 openstack_network_exporter[205841]: ERROR   15:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:51:01 compute-0 openstack_network_exporter[205841]: ERROR   15:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
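[editor's note] These exporter errors recur every scrape and are benign here: the ovn-northd lookups can never succeed on a compute node (northd runs on the control plane), and the pmd-perf-show / pmd-rxq-show appctl calls target the userspace (dpif-netdev) datapath while this host uses the kernel "system" datapath, as the bound-port details above show. The exporter resolves daemon PIDs from appctl-style control sockets; a quick check of which sockets actually exist, assuming the default OVS rundir:

    import glob
    # ovs daemons create /var/run/openvswitch/<daemon>.<pid>.ctl; on this host
    # ovs-vswitchd/ovsdb-server sockets may appear, but never ovn-northd.
    print(glob.glob('/var/run/openvswitch/*.ctl'))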
Nov 29 15:51:02 compute-0 nova_compute[189485]: 2025-11-29 15:51:02.422 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:04 compute-0 nova_compute[189485]: 2025-11-29 15:51:04.079 189489 DEBUG nova.objects.instance [None req-cd9e6e16-5d08-46ac-aeb6-4c1bafb8813d fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lazy-loading 'flavor' on Instance uuid a8fbb028-7553-448d-8ee5-e0b34ade7315 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:51:04 compute-0 nova_compute[189485]: 2025-11-29 15:51:04.130 189489 DEBUG oslo_concurrency.lockutils [None req-cd9e6e16-5d08-46ac-aeb6-4c1bafb8813d fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:51:04 compute-0 nova_compute[189485]: 2025-11-29 15:51:04.131 189489 DEBUG oslo_concurrency.lockutils [None req-cd9e6e16-5d08-46ac-aeb6-4c1bafb8813d fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquired lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:51:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:04.347 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:51:04 compute-0 nova_compute[189485]: 2025-11-29 15:51:04.347 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:04.348 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 15:51:04 compute-0 podman[251883]: 2025-11-29 15:51:04.671764423 +0000 UTC m=+0.106589715 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true)
Nov 29 15:51:05 compute-0 nova_compute[189485]: 2025-11-29 15:51:05.622 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:06 compute-0 nova_compute[189485]: 2025-11-29 15:51:06.093 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:06 compute-0 nova_compute[189485]: 2025-11-29 15:51:06.799 189489 DEBUG nova.network.neutron [None req-cd9e6e16-5d08-46ac-aeb6-4c1bafb8813d fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 15:51:06 compute-0 nova_compute[189485]: 2025-11-29 15:51:06.908 189489 DEBUG nova.compute.manager [req-c224cbdf-67c5-43cd-a232-6deb7ae0d127 req-6714755f-af04-4fbb-99ee-6d7821ba2c09 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-changed-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:06 compute-0 nova_compute[189485]: 2025-11-29 15:51:06.909 189489 DEBUG nova.compute.manager [req-c224cbdf-67c5-43cd-a232-6deb7ae0d127 req-6714755f-af04-4fbb-99ee-6d7821ba2c09 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Refreshing instance network info cache due to event network-changed-6a066856-f7c0-4504-8a23-f8d966710ea5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 15:51:06 compute-0 nova_compute[189485]: 2025-11-29 15:51:06.909 189489 DEBUG oslo_concurrency.lockutils [req-c224cbdf-67c5-43cd-a232-6deb7ae0d127 req-6714755f-af04-4fbb-99ee-6d7821ba2c09 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:51:07 compute-0 podman[251902]: 2025-11-29 15:51:07.710053671 +0000 UTC m=+0.146885709 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:51:08 compute-0 nova_compute[189485]: 2025-11-29 15:51:08.848 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:09 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:09.351 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
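[editor's note] This DbSetCommand is the metadata agent acknowledging southbound nb_cfg 12, five seconds after the SB_Global update above per its own "Delaying updating chassis table for 5 seconds" line, by stamping its Chassis_Private record. With ovsdbapp that transaction is roughly a one-liner; sketched here assuming sb_idl is an already-connected southbound API object (the table, record UUID, and external_ids value are copied from the log):

    # hypothetical pre-connected ovsdbapp southbound connection: sb_idl
    sb_idl.db_set(
        'Chassis_Private', '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),
    ).execute(check_error=True)  # commits the txn shown in the DEBUG line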
Nov 29 15:51:09 compute-0 nova_compute[189485]: 2025-11-29 15:51:09.857 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:10 compute-0 nova_compute[189485]: 2025-11-29 15:51:10.030 189489 DEBUG nova.network.neutron [None req-cd9e6e16-5d08-46ac-aeb6-4c1bafb8813d fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:51:10 compute-0 nova_compute[189485]: 2025-11-29 15:51:10.063 189489 DEBUG oslo_concurrency.lockutils [None req-cd9e6e16-5d08-46ac-aeb6-4c1bafb8813d fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Releasing lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:51:10 compute-0 nova_compute[189485]: 2025-11-29 15:51:10.064 189489 DEBUG nova.compute.manager [None req-cd9e6e16-5d08-46ac-aeb6-4c1bafb8813d fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 29 15:51:10 compute-0 nova_compute[189485]: 2025-11-29 15:51:10.064 189489 DEBUG nova.compute.manager [None req-cd9e6e16-5d08-46ac-aeb6-4c1bafb8813d fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] network_info to inject: |[{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 29 15:51:10 compute-0 nova_compute[189485]: 2025-11-29 15:51:10.067 189489 DEBUG oslo_concurrency.lockutils [req-c224cbdf-67c5-43cd-a232-6deb7ae0d127 req-6714755f-af04-4fbb-99ee-6d7821ba2c09 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:51:10 compute-0 nova_compute[189485]: 2025-11-29 15:51:10.067 189489 DEBUG nova.network.neutron [req-c224cbdf-67c5-43cd-a232-6deb7ae0d127 req-6714755f-af04-4fbb-99ee-6d7821ba2c09 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Refreshing network info cache for port 6a066856-f7c0-4504-8a23-f8d966710ea5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:51:10 compute-0 nova_compute[189485]: 2025-11-29 15:51:10.625 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:11 compute-0 nova_compute[189485]: 2025-11-29 15:51:11.096 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:12 compute-0 nova_compute[189485]: 2025-11-29 15:51:12.302 189489 DEBUG nova.objects.instance [None req-c1f967ee-a905-4602-baee-01537bcdb22f fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lazy-loading 'flavor' on Instance uuid a8fbb028-7553-448d-8ee5-e0b34ade7315 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:12 compute-0 nova_compute[189485]: 2025-11-29 15:51:12.341 189489 DEBUG oslo_concurrency.lockutils [None req-c1f967ee-a905-4602-baee-01537bcdb22f fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:51:13 compute-0 nova_compute[189485]: 2025-11-29 15:51:13.091 189489 DEBUG nova.network.neutron [req-c224cbdf-67c5-43cd-a232-6deb7ae0d127 req-6714755f-af04-4fbb-99ee-6d7821ba2c09 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updated VIF entry in instance network info cache for port 6a066856-f7c0-4504-8a23-f8d966710ea5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:51:13 compute-0 nova_compute[189485]: 2025-11-29 15:51:13.092 189489 DEBUG nova.network.neutron [req-c224cbdf-67c5-43cd-a232-6deb7ae0d127 req-6714755f-af04-4fbb-99ee-6d7821ba2c09 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:51:13 compute-0 nova_compute[189485]: 2025-11-29 15:51:13.111 189489 DEBUG oslo_concurrency.lockutils [req-c224cbdf-67c5-43cd-a232-6deb7ae0d127 req-6714755f-af04-4fbb-99ee-6d7821ba2c09 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:51:13 compute-0 nova_compute[189485]: 2025-11-29 15:51:13.113 189489 DEBUG oslo_concurrency.lockutils [None req-c1f967ee-a905-4602-baee-01537bcdb22f fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquired lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.444 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "7006a15e-c744-447a-8a3f-98ba3a07b080" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.446 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.471 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.546 189489 DEBUG nova.network.neutron [None req-c1f967ee-a905-4602-baee-01537bcdb22f fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.608 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.610 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.629 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.630 189489 INFO nova.compute.claims [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.682 189489 DEBUG nova.compute.manager [req-8207572a-4f8f-43d5-bdce-f9f980403c17 req-13d93e9a-4576-40fb-928a-c243291d8c3f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-changed-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.683 189489 DEBUG nova.compute.manager [req-8207572a-4f8f-43d5-bdce-f9f980403c17 req-13d93e9a-4576-40fb-928a-c243291d8c3f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Refreshing instance network info cache due to event network-changed-6a066856-f7c0-4504-8a23-f8d966710ea5. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.683 189489 DEBUG oslo_concurrency.lockutils [req-8207572a-4f8f-43d5-bdce-f9f980403c17 req-13d93e9a-4576-40fb-928a-c243291d8c3f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.791 189489 DEBUG nova.compute.provider_tree [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.805 189489 DEBUG nova.scheduler.client.report [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.831 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.831 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.890 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.890 189489 DEBUG nova.network.neutron [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.919 189489 INFO nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:51:14 compute-0 nova_compute[189485]: 2025-11-29 15:51:14.934 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.025 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.028 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.029 189489 INFO nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Creating image(s)#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.030 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "/var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.031 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "/var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.033 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "/var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.059 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.159 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
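
The qemu-img probe above runs under oslo_concurrency.prlimit, which applies an address-space cap (--as, bytes) and a CPU-seconds cap (--cpu) to the child before exec, so parsing a hostile or corrupt image cannot exhaust the host. A sketch of the same bounded call, using the base-image path from the log:

    import json
    import subprocess

    cmd = [
        "python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824", "--cpu=30", "--",   # 1 GiB address space, 30 s CPU
        "qemu-img", "info",
        "/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1",
        "--force-share", "--output=json",
    ]
    info = json.loads(subprocess.run(cmd, check=True,
                                     capture_output=True).stdout)
    print(info["format"], info["virtual-size"])
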
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.161 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.162 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.189 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.246 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.247 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.269 189489 DEBUG nova.policy [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': 'b31d88fdbdd24aa38b065d06114894f7', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5a2a25fd5988424f94cde619b09c8f11', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
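
The policy line above is a routine, expected failure: the requester holds only the reader and member roles, which do not satisfy network:attach_external_network, so the boot simply proceeds without external networks. A hedged sketch of such a check with oslo.policy; the check string is illustrative, not Nova's shipped default:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        "network:attach_external_network", "is_admin:True"))
    creds = {"is_admin": False, "roles": ["reader", "member"]}
    # Returns False instead of raising because do_raise defaults to False.
    print(enforcer.enforce("network:attach_external_network", {}, creds))
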
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.293 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk 1073741824" returned: 0 in 0.046s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.294 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
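
The create_qcow2_image step above, serialized by the base-image lock, builds the instance disk as a copy-on-write qcow2 overlay on the cached base file rather than copying the image. The equivalent standalone call, using the paths and size from the log:

    import subprocess

    subprocess.run([
        "qemu-img", "create", "-f", "qcow2",
        "-o", "backing_file=/var/lib/nova/instances/_base/"
              "c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw",
        "/var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk",
        "1073741824",  # 1 GiB virtual size, matching the flavor's root_gb=1
    ], check=True)
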
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.294 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.394 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.397 189489 DEBUG nova.virt.disk.api [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Checking if we can resize image /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.399 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.490 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.493 189489 DEBUG nova.virt.disk.api [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Cannot resize image /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
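
The pair of lines above is Nova's grow-only resize guard: the overlay's virtual size already equals the requested 1073741824 bytes, so the resize is refused as "smaller". A sketch of that check, assuming qemu-img is on PATH:

    import json
    import subprocess

    def can_resize_image(path, requested_bytes):
        out = subprocess.run(
            ["qemu-img", "info", "--output=json", path],
            check=True, capture_output=True).stdout
        current = json.loads(out)["virtual-size"]
        # Growing is allowed; shrinking or a same-size request is refused,
        # which is what the debug message above reports.
        return requested_bytes > current
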
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.493 189489 DEBUG nova.objects.instance [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lazy-loading 'migration_context' on Instance uuid 7006a15e-c744-447a-8a3f-98ba3a07b080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:15 compute-0 nova_compute[189485]: 2025-11-29 15:51:15.628 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:16 compute-0 nova_compute[189485]: 2025-11-29 15:51:16.099 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:16 compute-0 nova_compute[189485]: 2025-11-29 15:51:16.430 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:51:16 compute-0 nova_compute[189485]: 2025-11-29 15:51:16.431 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Ensure instance console log exists: /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:51:16 compute-0 nova_compute[189485]: 2025-11-29 15:51:16.432 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:16 compute-0 nova_compute[189485]: 2025-11-29 15:51:16.433 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:16 compute-0 nova_compute[189485]: 2025-11-29 15:51:16.434 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:17 compute-0 nova_compute[189485]: 2025-11-29 15:51:17.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:51:17 compute-0 nova_compute[189485]: 2025-11-29 15:51:17.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:51:17 compute-0 nova_compute[189485]: 2025-11-29 15:51:17.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 15:51:17 compute-0 nova_compute[189485]: 2025-11-29 15:51:17.529 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871#033[00m
Nov 29 15:51:17 compute-0 nova_compute[189485]: 2025-11-29 15:51:17.913 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:51:18 compute-0 nova_compute[189485]: 2025-11-29 15:51:18.253 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:18 compute-0 nova_compute[189485]: 2025-11-29 15:51:18.392 189489 DEBUG nova.network.neutron [None req-c1f967ee-a905-4602-baee-01537bcdb22f fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
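
The instance_info_cache payload above nests each fixed IP and its floating IPs several levels deep (network -> subnets -> ips -> floating_ips). A short walk over the same structure, trimmed to the fields it actually reads:

    nw_info = [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5",
                "network": {"subnets": [{"ips": [
                    {"address": "10.100.0.9",
                     "floating_ips": [{"address": "192.168.122.193"}]}]}]}}]

    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", fips)
    # 6a066856-f7c0-4504-8a23-f8d966710ea5 10.100.0.9 -> ['192.168.122.193']
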
Nov 29 15:51:18 compute-0 nova_compute[189485]: 2025-11-29 15:51:18.563 189489 DEBUG oslo_concurrency.lockutils [None req-c1f967ee-a905-4602-baee-01537bcdb22f fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Releasing lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:51:18 compute-0 nova_compute[189485]: 2025-11-29 15:51:18.563 189489 DEBUG nova.compute.manager [None req-c1f967ee-a905-4602-baee-01537bcdb22f fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Inject network info _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7144#033[00m
Nov 29 15:51:18 compute-0 nova_compute[189485]: 2025-11-29 15:51:18.563 189489 DEBUG nova.compute.manager [None req-c1f967ee-a905-4602-baee-01537bcdb22f fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] network_info to inject: |[{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _inject_network_info /usr/lib/python3.9/site-packages/nova/compute/manager.py:7145#033[00m
Nov 29 15:51:18 compute-0 nova_compute[189485]: 2025-11-29 15:51:18.567 189489 DEBUG oslo_concurrency.lockutils [req-8207572a-4f8f-43d5-bdce-f9f980403c17 req-13d93e9a-4576-40fb-928a-c243291d8c3f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:51:18 compute-0 nova_compute[189485]: 2025-11-29 15:51:18.567 189489 DEBUG nova.network.neutron [req-8207572a-4f8f-43d5-bdce-f9f980403c17 req-13d93e9a-4576-40fb-928a-c243291d8c3f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Refreshing network info cache for port 6a066856-f7c0-4504-8a23-f8d966710ea5 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:51:19 compute-0 nova_compute[189485]: 2025-11-29 15:51:19.778 189489 DEBUG nova.network.neutron [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Successfully created port: 026e3a29-d366-4753-b12d-f2910dbf0922 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.276 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "a8fbb028-7553-448d-8ee5-e0b34ade7315" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.277 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.277 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.278 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.278 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.280 189489 INFO nova.compute.manager [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Terminating instance#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.281 189489 DEBUG nova.compute.manager [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:51:20 compute-0 kernel: tap6a066856-f7 (unregistering): left promiscuous mode
Nov 29 15:51:20 compute-0 NetworkManager[56360]: <info>  [1764431480.3407] device (tap6a066856-f7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.354 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:20 compute-0 ovn_controller[97827]: 2025-11-29T15:51:20Z|00101|binding|INFO|Releasing lport 6a066856-f7c0-4504-8a23-f8d966710ea5 from this chassis (sb_readonly=0)
Nov 29 15:51:20 compute-0 ovn_controller[97827]: 2025-11-29T15:51:20Z|00102|binding|INFO|Setting lport 6a066856-f7c0-4504-8a23-f8d966710ea5 down in Southbound
Nov 29 15:51:20 compute-0 ovn_controller[97827]: 2025-11-29T15:51:20Z|00103|binding|INFO|Removing iface tap6a066856-f7 ovn-installed in OVS
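
The three ovn_controller messages above are the southbound half of the teardown: the chassis releases the logical port, marks it down in the Southbound DB, and strips the ovn-installed marker from the OVS interface. Assuming access to the SB database, the binding can be checked with ovn-sbctl (an empty chassis column means the port is bound nowhere):

    import subprocess

    subprocess.run([
        "ovn-sbctl", "find", "Port_Binding",
        "logical_port=6a066856-f7c0-4504-8a23-f8d966710ea5",
    ], check=True)
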
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.358 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.383 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:20 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Nov 29 15:51:20 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 39.097s CPU time.
Nov 29 15:51:20 compute-0 systemd-machined[155802]: Machine qemu-7-instance-00000007 terminated.
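
The \x2d sequences in the scope names above are systemd's escaping of literal hyphens in unit names; undoing the escape recovers the machine name that systemd-machined reports terminated:

    name = r"machine-qemu\x2d7\x2dinstance\x2d00000007.scope"
    print(name.replace(r"\x2d", "-"))
    # machine-qemu-7-instance-00000007.scope
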
Nov 29 15:51:20 compute-0 podman[251942]: 2025-11-29 15:51:20.483160118 +0000 UTC m=+0.109566618 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.587 189489 INFO nova.virt.libvirt.driver [-] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Instance destroyed successfully.#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.588 189489 DEBUG nova.objects.instance [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lazy-loading 'resources' on Instance uuid a8fbb028-7553-448d-8ee5-e0b34ade7315 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:20.597 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:27:bf:aa 10.100.0.9'], port_security=['fa:16:3e:27:bf:aa 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': 'a8fbb028-7553-448d-8ee5-e0b34ade7315', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-4513a63b-8374-4327-8252-b3341ea0d01b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '31e7f8b8153d41ff92532e0affa83e06', 'neutron:revision_number': '6', 'neutron:security_group_ids': '604858cc-9311-4bea-9cbd-ecdfcdc76e2a', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.193'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=cd3bfea5-211e-4f33-8f36-c788a1fc59d7, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=6a066856-f7c0-4504-8a23-f8d966710ea5) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:51:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:20.599 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 6a066856-f7c0-4504-8a23-f8d966710ea5 in datapath 4513a63b-8374-4327-8252-b3341ea0d01b unbound from our chassis#033[00m
Nov 29 15:51:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:20.602 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 4513a63b-8374-4327-8252-b3341ea0d01b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:51:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:20.604 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[40674246-4bf8-45d0-9e51-19334ce13aee]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:20.605 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b namespace which is not needed anymore#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.631 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.699 189489 DEBUG nova.virt.libvirt.vif [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:50:01Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-1814984799',display_name='tempest-AttachInterfacesUnderV243Test-server-1814984799',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-1814984799',id=7,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBIxbIX6UWVvi623b2TPdtqR6dmeGyuJb/iUDGidiNkmGh2BwNaoWLgF60VYMySzUoNR4AOGsxFkCRSgQsaKINM96EWpBogdkfjelUHp1uk3e9r5r0s3ahvYCRtOL9cB4Xw==',key_name='tempest-keypair-64440635',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:50:19Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='31e7f8b8153d41ff92532e0affa83e06',ramdisk_id='',reservation_id='r-tz43hznh',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-AttachInterfacesUnderV243Test-1283287519',owner_user_name='tempest-AttachInterfacesUnderV243Test-1283287519-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:51:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='fc787028808a4f33ab230e0ce4fff83b',uuid=a8fbb028-7553-448d-8ee5-e0b34ade7315,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.700 189489 DEBUG nova.network.os_vif_util [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Converting VIF {"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.701 189489 DEBUG nova.network.os_vif_util [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:27:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=6a066856-f7c0-4504-8a23-f8d966710ea5,network=Network(4513a63b-8374-4327-8252-b3341ea0d01b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a066856-f7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.702 189489 DEBUG os_vif [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=6a066856-f7c0-4504-8a23-f8d966710ea5,network=Network(4513a63b-8374-4327-8252-b3341ea0d01b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a066856-f7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.704 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.705 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6a066856-f7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.708 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.712 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.715 189489 INFO os_vif [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:27:bf:aa,bridge_name='br-int',has_traffic_filtering=True,id=6a066856-f7c0-4504-8a23-f8d966710ea5,network=Network(4513a63b-8374-4327-8252-b3341ea0d01b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap6a066856-f7')#033[00m
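
The unplug above commits a single ovsdbapp transaction, DelPortCommand with if_exists=True, against the local OVS. Its command-line equivalent, where --if-exists likewise turns a missing port into a no-op:

    import subprocess

    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tap6a066856-f7"],
        check=True)
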
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.716 189489 INFO nova.virt.libvirt.driver [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Deleting instance files /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315_del#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.717 189489 INFO nova.virt.libvirt.driver [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Deletion of /var/lib/nova/instances/a8fbb028-7553-448d-8ee5-e0b34ade7315_del complete#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.802 189489 INFO nova.compute.manager [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Took 0.52 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.802 189489 DEBUG oslo.service.loopingcall [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.803 189489 DEBUG nova.compute.manager [-] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:51:20 compute-0 nova_compute[189485]: 2025-11-29 15:51:20.804 189489 DEBUG nova.network.neutron [-] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:51:20 compute-0 neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b[251076]: [NOTICE]   (251080) : haproxy version is 2.8.14-c23fe91
Nov 29 15:51:20 compute-0 neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b[251076]: [NOTICE]   (251080) : path to executable is /usr/sbin/haproxy
Nov 29 15:51:20 compute-0 neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b[251076]: [WARNING]  (251080) : Exiting Master process...
Nov 29 15:51:20 compute-0 neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b[251076]: [ALERT]    (251080) : Current worker (251082) exited with code 143 (Terminated)
Nov 29 15:51:20 compute-0 neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b[251076]: [WARNING]  (251080) : All workers exited. Exiting... (0)
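
Exit code 143 in the haproxy ALERT above follows the usual 128-plus-signal convention, i.e. SIGTERM (15): the worker was stopped deliberately during container shutdown, not crashed:

    import signal

    print(128 + signal.SIGTERM)  # 143 -> terminated by SIGTERM
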
Nov 29 15:51:20 compute-0 systemd[1]: libpod-f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67.scope: Deactivated successfully.
Nov 29 15:51:20 compute-0 podman[252007]: 2025-11-29 15:51:20.847619049 +0000 UTC m=+0.071065712 container died f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67-userdata-shm.mount: Deactivated successfully.
Nov 29 15:51:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-2ad0fa9ffa2cac37199275306747be993c6176e1742822ce7d59cb8280c3f946-merged.mount: Deactivated successfully.
Nov 29 15:51:20 compute-0 podman[252007]: 2025-11-29 15:51:20.9171811 +0000 UTC m=+0.140627803 container cleanup f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 15:51:20 compute-0 systemd[1]: libpod-conmon-f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67.scope: Deactivated successfully.
Nov 29 15:51:21 compute-0 podman[252035]: 2025-11-29 15:51:21.045034978 +0000 UTC m=+0.092216131 container remove f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 15:51:21 compute-0 nova_compute[189485]: 2025-11-29 15:51:21.048 189489 DEBUG nova.compute.manager [req-c0facabc-3228-4b1a-8406-70b9a71fb117 req-4a1df8ff-dbf1-4f07-a0ef-5e3009314937 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-vif-unplugged-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:21 compute-0 nova_compute[189485]: 2025-11-29 15:51:21.048 189489 DEBUG oslo_concurrency.lockutils [req-c0facabc-3228-4b1a-8406-70b9a71fb117 req-4a1df8ff-dbf1-4f07-a0ef-5e3009314937 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:21 compute-0 nova_compute[189485]: 2025-11-29 15:51:21.049 189489 DEBUG oslo_concurrency.lockutils [req-c0facabc-3228-4b1a-8406-70b9a71fb117 req-4a1df8ff-dbf1-4f07-a0ef-5e3009314937 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:21 compute-0 nova_compute[189485]: 2025-11-29 15:51:21.049 189489 DEBUG oslo_concurrency.lockutils [req-c0facabc-3228-4b1a-8406-70b9a71fb117 req-4a1df8ff-dbf1-4f07-a0ef-5e3009314937 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:21 compute-0 nova_compute[189485]: 2025-11-29 15:51:21.050 189489 DEBUG nova.compute.manager [req-c0facabc-3228-4b1a-8406-70b9a71fb117 req-4a1df8ff-dbf1-4f07-a0ef-5e3009314937 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] No waiting events found dispatching network-vif-unplugged-6a066856-f7c0-4504-8a23-f8d966710ea5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:51:21 compute-0 nova_compute[189485]: 2025-11-29 15:51:21.050 189489 DEBUG nova.compute.manager [req-c0facabc-3228-4b1a-8406-70b9a71fb117 req-4a1df8ff-dbf1-4f07-a0ef-5e3009314937 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-vif-unplugged-6a066856-f7c0-4504-8a23-f8d966710ea5 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.055 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[7708b3f6-7c6c-49ff-8497-d5d4c65b4bc1]: (4, ('Sat Nov 29 03:51:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b (f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67)\nf88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67\nSat Nov 29 03:51:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b (f88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67)\nf88c547844ddae81bb3b215bc02006f942a0bea914dfb4e2a9a97c78e01d0a67\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.058 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[670839bf-e0f7-4a4e-9229-2a27c287c54a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.060 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap4513a63b-80, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:21 compute-0 nova_compute[189485]: 2025-11-29 15:51:21.063 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:21 compute-0 kernel: tap4513a63b-80: left promiscuous mode
Nov 29 15:51:21 compute-0 nova_compute[189485]: 2025-11-29 15:51:21.077 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.081 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[fd0d95c6-73cd-4996-a94e-358ed2d7723b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.100 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[be64d5a2-24f9-41d9-a195-4ce4313db546]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.102 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[277c2fc8-dfdf-4632-a261-f95d2fa84abf]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.125 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[4f710c33-2d2e-47fb-8464-961e6a105ca5]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 516790, 'reachable_time': 18110, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252048, 'error': None, 'target': 'ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
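
The large privsep reply above is a raw netlink RTM_NEWLINK dump of the loopback device inside the ovnmeta namespace, taken just before teardown; Neutron's ip_lib produces it via pyroute2. A minimal dump of the same shape, run in the current namespace:

    from pyroute2 import IPRoute

    with IPRoute() as ipr:
        for link in ipr.get_links():
            # Each message carries the IFLA_* attribute list seen above.
            print(link.get_attr("IFLA_IFNAME"), link["state"])
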
Nov 29 15:51:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d4513a63b\x2d8374\x2d4327\x2d8252\x2db3341ea0d01b.mount: Deactivated successfully.
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.130 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 15:51:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:21.130 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[44ede516-2e5e-4f00-ad31-5631b55d17b1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
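
With no VIF ports left on the datapath, the agent removes the metadata namespace through privsep (remove_netns above). The CLI equivalent of that final step:

    import subprocess

    subprocess.run(
        ["ip", "netns", "delete",
         "ovnmeta-4513a63b-8374-4327-8252-b3341ea0d01b"],
        check=True)
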
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.205 189489 DEBUG nova.network.neutron [req-8207572a-4f8f-43d5-bdce-f9f980403c17 req-13d93e9a-4576-40fb-928a-c243291d8c3f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updated VIF entry in instance network info cache for port 6a066856-f7c0-4504-8a23-f8d966710ea5. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.206 189489 DEBUG nova.network.neutron [req-8207572a-4f8f-43d5-bdce-f9f980403c17 req-13d93e9a-4576-40fb-928a-c243291d8c3f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.193", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.226 189489 DEBUG oslo_concurrency.lockutils [req-8207572a-4f8f-43d5-bdce-f9f980403c17 req-13d93e9a-4576-40fb-928a-c243291d8c3f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.228 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.229 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.229 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid a8fbb028-7553-448d-8ee5-e0b34ade7315 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.260 189489 DEBUG nova.network.neutron [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Successfully updated port: 026e3a29-d366-4753-b12d-f2910dbf0922 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.404 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "refresh_cache-7006a15e-c744-447a-8a3f-98ba3a07b080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.404 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquired lock "refresh_cache-7006a15e-c744-447a-8a3f-98ba3a07b080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.404 189489 DEBUG nova.network.neutron [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.435 189489 DEBUG nova.compute.manager [req-37217029-7833-48f7-ad0a-fa75ae8ad32e req-fd17f8c5-b5fe-4a82-9b9a-081245c76320 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received event network-changed-026e3a29-d366-4753-b12d-f2910dbf0922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.436 189489 DEBUG nova.compute.manager [req-37217029-7833-48f7-ad0a-fa75ae8ad32e req-fd17f8c5-b5fe-4a82-9b9a-081245c76320 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Refreshing instance network info cache due to event network-changed-026e3a29-d366-4753-b12d-f2910dbf0922. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.437 189489 DEBUG oslo_concurrency.lockutils [req-37217029-7833-48f7-ad0a-fa75ae8ad32e req-fd17f8c5-b5fe-4a82-9b9a-081245c76320 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-7006a15e-c744-447a-8a3f-98ba3a07b080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.677 189489 DEBUG nova.network.neutron [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.889 189489 DEBUG nova.network.neutron [-] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.925 189489 INFO nova.compute.manager [-] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Took 2.12 seconds to deallocate network for instance.
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.967 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:22 compute-0 nova_compute[189485]: 2025-11-29 15:51:22.968 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.137 189489 DEBUG nova.compute.provider_tree [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.158 189489 DEBUG nova.scheduler.client.report [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.174 189489 DEBUG nova.compute.manager [req-969e3965-3c33-4fc4-9a59-0a07c744f6da req-c3e67def-b696-41cd-9c6f-eca0a570d013 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.176 189489 DEBUG oslo_concurrency.lockutils [req-969e3965-3c33-4fc4-9a59-0a07c744f6da req-c3e67def-b696-41cd-9c6f-eca0a570d013 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.177 189489 DEBUG oslo_concurrency.lockutils [req-969e3965-3c33-4fc4-9a59-0a07c744f6da req-c3e67def-b696-41cd-9c6f-eca0a570d013 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.178 189489 DEBUG oslo_concurrency.lockutils [req-969e3965-3c33-4fc4-9a59-0a07c744f6da req-c3e67def-b696-41cd-9c6f-eca0a570d013 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.179 189489 DEBUG nova.compute.manager [req-969e3965-3c33-4fc4-9a59-0a07c744f6da req-c3e67def-b696-41cd-9c6f-eca0a570d013 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] No waiting events found dispatching network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.179 189489 WARNING nova.compute.manager [req-969e3965-3c33-4fc4-9a59-0a07c744f6da req-c3e67def-b696-41cd-9c6f-eca0a570d013 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received unexpected event network-vif-plugged-6a066856-f7c0-4504-8a23-f8d966710ea5 for instance with vm_state deleted and task_state None.
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.209 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.241s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.250 189489 INFO nova.scheduler.client.report [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Deleted allocations for instance a8fbb028-7553-448d-8ee5-e0b34ade7315
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.381 189489 DEBUG oslo_concurrency.lockutils [None req-f1b035c1-7cbb-495d-a01d-695efae154b6 fc787028808a4f33ab230e0ce4fff83b 31e7f8b8153d41ff92532e0affa83e06 - - default default] Lock "a8fbb028-7553-448d-8ee5-e0b34ade7315" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.792 189489 DEBUG nova.network.neutron [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Updating instance_info_cache with network_info: [{"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.814 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Releasing lock "refresh_cache-7006a15e-c744-447a-8a3f-98ba3a07b080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.814 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Instance network_info: |[{"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.814 189489 DEBUG oslo_concurrency.lockutils [req-37217029-7833-48f7-ad0a-fa75ae8ad32e req-fd17f8c5-b5fe-4a82-9b9a-081245c76320 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-7006a15e-c744-447a-8a3f-98ba3a07b080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.815 189489 DEBUG nova.network.neutron [req-37217029-7833-48f7-ad0a-fa75ae8ad32e req-fd17f8c5-b5fe-4a82-9b9a-081245c76320 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Refreshing network info cache for port 026e3a29-d366-4753-b12d-f2910dbf0922 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.817 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Start _get_guest_xml network_info=[{"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.825 189489 WARNING nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.835 189489 DEBUG nova.virt.libvirt.host [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.835 189489 DEBUG nova.virt.libvirt.host [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.842 189489 DEBUG nova.virt.libvirt.host [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.842 189489 DEBUG nova.virt.libvirt.host [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.843 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.843 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.844 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.844 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.844 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.844 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.845 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.845 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.845 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.846 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.846 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.846 189489 DEBUG nova.virt.hardware [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.849 189489 DEBUG nova.virt.libvirt.vif [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-130506979',display_name='tempest-ServerAddressesTestJSON-server-130506979',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-130506979',id=10,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a2a25fd5988424f94cde619b09c8f11',ramdisk_id='',reservation_id='r-moaj8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-594409186',owner_user_name='tempest-ServerAddressesTestJSON-594409186-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:51:14Z,user_data=None,user_id='b31d88fdbdd24aa38b065d06114894f7',uuid=7006a15e-c744-447a-8a3f-98ba3a07b080,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.849 189489 DEBUG nova.network.os_vif_util [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Converting VIF {"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.850 189489 DEBUG nova.network.os_vif_util [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5a:6e,bridge_name='br-int',has_traffic_filtering=True,id=026e3a29-d366-4753-b12d-f2910dbf0922,network=Network(5e69448a-aa26-4336-ba73-7967d1aa0093),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap026e3a29-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.851 189489 DEBUG nova.objects.instance [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7006a15e-c744-447a-8a3f-98ba3a07b080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.866 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <uuid>7006a15e-c744-447a-8a3f-98ba3a07b080</uuid>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <name>instance-0000000a</name>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <nova:name>tempest-ServerAddressesTestJSON-server-130506979</nova:name>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:51:23</nova:creationTime>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:51:23 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:51:23 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:51:23 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:51:23 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:51:23 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:51:23 compute-0 nova_compute[189485]:        <nova:user uuid="b31d88fdbdd24aa38b065d06114894f7">tempest-ServerAddressesTestJSON-594409186-project-member</nova:user>
Nov 29 15:51:23 compute-0 nova_compute[189485]:        <nova:project uuid="5a2a25fd5988424f94cde619b09c8f11">tempest-ServerAddressesTestJSON-594409186</nova:project>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:51:23 compute-0 nova_compute[189485]:        <nova:port uuid="026e3a29-d366-4753-b12d-f2910dbf0922">
Nov 29 15:51:23 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <system>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <entry name="serial">7006a15e-c744-447a-8a3f-98ba3a07b080</entry>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <entry name="uuid">7006a15e-c744-447a-8a3f-98ba3a07b080</entry>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </system>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <os>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  </os>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <features>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  </features>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk.config"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:3d:5a:6e"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <target dev="tap026e3a29-d3"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/console.log" append="off"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <video>
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </video>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:51:23 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:51:23 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:51:23 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:51:23 compute-0 nova_compute[189485]: </domain>
Nov 29 15:51:23 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.868 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Preparing to wait for external event network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.869 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.871 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.871 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.873 189489 DEBUG nova.virt.libvirt.vif [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-130506979',display_name='tempest-ServerAddressesTestJSON-server-130506979',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-130506979',id=10,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5a2a25fd5988424f94cde619b09c8f11',ramdisk_id='',reservation_id='r-moaj8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-594409186',owner_user_name='tempest-ServerAddressesTestJSON-594409186-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:51:14Z,user_data=None,user_id='b31d88fdbdd24aa38b065d06114894f7',uuid=7006a15e-c744-447a-8a3f-98ba3a07b080,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.874 189489 DEBUG nova.network.os_vif_util [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Converting VIF {"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.875 189489 DEBUG nova.network.os_vif_util [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5a:6e,bridge_name='br-int',has_traffic_filtering=True,id=026e3a29-d366-4753-b12d-f2910dbf0922,network=Network(5e69448a-aa26-4336-ba73-7967d1aa0093),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap026e3a29-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.876 189489 DEBUG os_vif [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5a:6e,bridge_name='br-int',has_traffic_filtering=True,id=026e3a29-d366-4753-b12d-f2910dbf0922,network=Network(5e69448a-aa26-4336-ba73-7967d1aa0093),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap026e3a29-d3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.877 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.878 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.879 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.883 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.884 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap026e3a29-d3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.885 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap026e3a29-d3, col_values=(('external_ids', {'iface-id': '026e3a29-d366-4753-b12d-f2910dbf0922', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:3d:5a:6e', 'vm-uuid': '7006a15e-c744-447a-8a3f-98ba3a07b080'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.888 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:23 compute-0 NetworkManager[56360]: <info>  [1764431483.8908] manager: (tap026e3a29-d3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/50)
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.891 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.900 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:23 compute-0 nova_compute[189485]: 2025-11-29 15:51:23.902 189489 INFO os_vif [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5a:6e,bridge_name='br-int',has_traffic_filtering=True,id=026e3a29-d366-4753-b12d-f2910dbf0922,network=Network(5e69448a-aa26-4336-ba73-7967d1aa0093),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap026e3a29-d3')
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.008 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.009 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.009 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] No VIF found with MAC fa:16:3e:3d:5a:6e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.010 189489 INFO nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Using config drive
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.249 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updating instance_info_cache with network_info: [{"id": "6a066856-f7c0-4504-8a23-f8d966710ea5", "address": "fa:16:3e:27:bf:aa", "network": {"id": "4513a63b-8374-4327-8252-b3341ea0d01b", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-272395306-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "31e7f8b8153d41ff92532e0affa83e06", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6a066856-f7", "ovs_interfaceid": "6a066856-f7c0-4504-8a23-f8d966710ea5", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.273 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-a8fbb028-7553-448d-8ee5-e0b34ade7315" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.274 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.274 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.275 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.275 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.275 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.512 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.512 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.512 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.513 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.621 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.717 189489 DEBUG nova.compute.manager [req-ebee2fe5-18cf-439e-93c4-61f2e3e0ae01 req-fda513ee-dcac-43f3-aa04-f02e5dcba6a2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Received event network-vif-deleted-6a066856-f7c0-4504-8a23-f8d966710ea5 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.726 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.727 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.844 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.853 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.930 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:24 compute-0 nova_compute[189485]: 2025-11-29 15:51:24.931 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.002 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.005 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Periodic task is updating the host stat, it is trying to get disk instance-0000000a, but disk file was removed by concurrent operations such as resize.: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk.config'#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.047 189489 INFO nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Creating config drive at /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk.config#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.054 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk1edifzw execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.202 189489 DEBUG oslo_concurrency.processutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpk1edifzw" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:25 compute-0 kernel: tap026e3a29-d3: entered promiscuous mode
Nov 29 15:51:25 compute-0 NetworkManager[56360]: <info>  [1764431485.2964] manager: (tap026e3a29-d3): new Tun device (/org/freedesktop/NetworkManager/Devices/51)
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.295 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:25 compute-0 ovn_controller[97827]: 2025-11-29T15:51:25Z|00104|binding|INFO|Claiming lport 026e3a29-d366-4753-b12d-f2910dbf0922 for this chassis.
Nov 29 15:51:25 compute-0 ovn_controller[97827]: 2025-11-29T15:51:25Z|00105|binding|INFO|026e3a29-d366-4753-b12d-f2910dbf0922: Claiming fa:16:3e:3d:5a:6e 10.100.0.12
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.303 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:5a:6e 10.100.0.12'], port_security=['fa:16:3e:3d:5a:6e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7006a15e-c744-447a-8a3f-98ba3a07b080', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e69448a-aa26-4336-ba73-7967d1aa0093', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a2a25fd5988424f94cde619b09c8f11', 'neutron:revision_number': '2', 'neutron:security_group_ids': '25cd3a8c-300e-4617-b51c-03954717186c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3880ced-0ccf-41f6-8f2c-0d9948b36e6c, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=026e3a29-d366-4753-b12d-f2910dbf0922) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.304 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 026e3a29-d366-4753-b12d-f2910dbf0922 in datapath 5e69448a-aa26-4336-ba73-7967d1aa0093 bound to our chassis#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.306 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 5e69448a-aa26-4336-ba73-7967d1aa0093#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.319 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[762e8d4c-caee-4bc1-97d3-53f19eaa10c0]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.320 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap5e69448a-a1 in ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 15:51:25 compute-0 ovn_controller[97827]: 2025-11-29T15:51:25Z|00106|binding|INFO|Setting lport 026e3a29-d366-4753-b12d-f2910dbf0922 ovn-installed in OVS
Nov 29 15:51:25 compute-0 ovn_controller[97827]: 2025-11-29T15:51:25Z|00107|binding|INFO|Setting lport 026e3a29-d366-4753-b12d-f2910dbf0922 up in Southbound
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.322 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.322 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap5e69448a-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.322 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[55b9038e-41a1-456c-a529-9fda6f0652d7]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.324 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[208777b2-bd02-4df7-b3ef-5ab93d4ebffd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.334 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[85c7ff03-74bf-46cd-870d-43ad4960806f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 systemd-machined[155802]: New machine qemu-10-instance-0000000a.
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.361 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[ceae223d-7e38-4f02-9121-5a81b79dffb4]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Nov 29 15:51:25 compute-0 systemd-udevd[252086]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:51:25 compute-0 NetworkManager[56360]: <info>  [1764431485.3888] device (tap026e3a29-d3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:51:25 compute-0 NetworkManager[56360]: <info>  [1764431485.3896] device (tap026e3a29-d3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.393 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[27651059-6d07-4aad-9d7f-c48cd85d73f0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 NetworkManager[56360]: <info>  [1764431485.4013] manager: (tap5e69448a-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/52)
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.400 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8183c487-7c8c-499f-b498-43881ffe668e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.436 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8ef4b1-fa3b-43d5-b475-7409aa040cce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.439 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[caf13cc9-6a3f-497f-a05d-e63af1c656f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 NetworkManager[56360]: <info>  [1764431485.4619] device (tap5e69448a-a0): carrier: link connected
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.470 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[54fe6bcc-6cc5-4512-b3e2-cb03e55f4fe8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.485 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[119a01f3-5cf1-43f7-afd1-cdb69741508b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e69448a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:e3:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524111, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252115, 'error': None, 'target': 'ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.500 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[064107e7-472d-4ee4-9ad2-dda43ed154cb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0e:e3bd'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 524111, 'tstamp': 524111}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252116, 'error': None, 'target': 'ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.516 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[f54bf298-0594-413b-be5a-c9e7e87f47ef]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap5e69448a-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0e:e3:bd'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 32], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524111, 'reachable_time': 30576, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252117, 'error': None, 'target': 'ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.555 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[9d2ed35a-e15d-4b21-9a62-faed7a02329a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.584 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.586 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5174MB free_disk=72.31175231933594GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.587 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.587 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.625 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6af5e10c-0267-4dcf-ba05-70443bf47725]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.626 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e69448a-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.626 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.627 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap5e69448a-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:25 compute-0 NetworkManager[56360]: <info>  [1764431485.6294] manager: (tap5e69448a-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/53)
Nov 29 15:51:25 compute-0 kernel: tap5e69448a-a0: entered promiscuous mode
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.631 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.632 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap5e69448a-a0, col_values=(('external_ids', {'iface-id': 'd9cbf719-7417-45df-a78c-90f22682dcfa'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.634 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:25 compute-0 ovn_controller[97827]: 2025-11-29T15:51:25Z|00108|binding|INFO|Releasing lport d9cbf719-7417-45df-a78c-90f22682dcfa from this chassis (sb_readonly=0)
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.636 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.636 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/5e69448a-aa26-4336-ba73-7967d1aa0093.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/5e69448a-aa26-4336-ba73-7967d1aa0093.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.637 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[baa53de1-e1a0-4b46-9991-b2b87e3523aa]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.639 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-5e69448a-aa26-4336-ba73-7967d1aa0093
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/5e69448a-aa26-4336-ba73-7967d1aa0093.pid.haproxy
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID 5e69448a-aa26-4336-ba73-7967d1aa0093
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 15:51:25 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:25.640 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093', 'env', 'PROCESS_TAG=haproxy-5e69448a-aa26-4336-ba73-7967d1aa0093', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/5e69448a-aa26-4336-ba73-7967d1aa0093.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.647 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.654 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431485.6531193, 7006a15e-c744-447a-8a3f-98ba3a07b080 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.654 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] VM Started (Lifecycle Event)#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.695 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance ea685573-5d12-4d41-8c8d-1d73dc63399d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.696 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 7006a15e-c744-447a-8a3f-98ba3a07b080 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.696 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.697 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.702 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.707 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431485.6533198, 7006a15e-c744-447a-8a3f-98ba3a07b080 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.708 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.734 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.741 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.768 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.796 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.813 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.838 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:51:25 compute-0 nova_compute[189485]: 2025-11-29 15:51:25.838 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.005 189489 DEBUG nova.network.neutron [req-37217029-7833-48f7-ad0a-fa75ae8ad32e req-fd17f8c5-b5fe-4a82-9b9a-081245c76320 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Updated VIF entry in instance network info cache for port 026e3a29-d366-4753-b12d-f2910dbf0922. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.007 189489 DEBUG nova.network.neutron [req-37217029-7833-48f7-ad0a-fa75ae8ad32e req-fd17f8c5-b5fe-4a82-9b9a-081245c76320 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Updating instance_info_cache with network_info: [{"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.038 189489 DEBUG oslo_concurrency.lockutils [req-37217029-7833-48f7-ad0a-fa75ae8ad32e req-fd17f8c5-b5fe-4a82-9b9a-081245c76320 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-7006a15e-c744-447a-8a3f-98ba3a07b080" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:51:26 compute-0 podman[252156]: 2025-11-29 15:51:26.081812192 +0000 UTC m=+0.065636526 container create 42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:51:26 compute-0 podman[252156]: 2025-11-29 15:51:26.048278771 +0000 UTC m=+0.032103145 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:51:26 compute-0 systemd[1]: Started libpod-conmon-42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af.scope.
Nov 29 15:51:26 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:51:26 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/733f5eee5e847190fd0d7ba4a273e8463e0b4dd9dfd380311eb2694e79323c5f/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:51:26 compute-0 podman[252156]: 2025-11-29 15:51:26.208292185 +0000 UTC m=+0.192116529 container init 42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 15:51:26 compute-0 podman[252156]: 2025-11-29 15:51:26.217474621 +0000 UTC m=+0.201298945 container start 42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:51:26 compute-0 neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093[252171]: [NOTICE]   (252175) : New worker (252177) forked
Nov 29 15:51:26 compute-0 neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093[252171]: [NOTICE]   (252175) : Loading success.
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.445 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.837 189489 DEBUG nova.compute.manager [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received event network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.838 189489 DEBUG oslo_concurrency.lockutils [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.839 189489 DEBUG oslo_concurrency.lockutils [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.840 189489 DEBUG oslo_concurrency.lockutils [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.841 189489 DEBUG nova.compute.manager [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Processing event network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.842 189489 DEBUG nova.compute.manager [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received event network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.843 189489 DEBUG oslo_concurrency.lockutils [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.844 189489 DEBUG oslo_concurrency.lockutils [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.845 189489 DEBUG oslo_concurrency.lockutils [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.845 189489 DEBUG nova.compute.manager [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] No waiting events found dispatching network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.846 189489 WARNING nova.compute.manager [req-e3663cfa-27c2-4208-b016-a491fd8f025b req-3b5180f9-50a7-4f0e-aa8b-4afa9a05560f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received unexpected event network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 for instance with vm_state building and task_state spawning.#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.847 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.849 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.857 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431486.8562639, 7006a15e-c744-447a-8a3f-98ba3a07b080 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.858 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.861 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.870 189489 INFO nova.virt.libvirt.driver [-] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Instance spawned successfully.#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.871 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.878 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.895 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.905 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.906 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.908 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.910 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.911 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.913 189489 DEBUG nova.virt.libvirt.driver [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
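The six "Found default for ..." lines record the buses and models libvirt actually chose at first boot; persisting them keeps the guest's hardware layout stable across rebuilds and migrations. A hypothetical sketch of the registration loop (the stored image_* keys are visible in the Instance dump further below):

    PROPS = ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus',
             'hw_pointer_model', 'hw_video_model', 'hw_vif_model']

    def register_undefined_details(instance, chosen):
        # chosen: e.g. {'hw_disk_bus': 'virtio', 'hw_cdrom_bus': 'sata', ...}
        for prop in PROPS:
            key = 'image_%s' % prop
            val = chosen.get(prop)
            if val and key not in instance.system_metadata:
                instance.system_metadata[key] = val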
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.921 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
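"During sync_power_state the instance has a pending task (spawning). Skip." is the lifecycle handler declining to reconcile power state (the DB says 0, the hypervisor says 1) while another operation owns the instance. A schematic of the guard:

    def sync_power_state(instance, vm_power_state):
        if instance.task_state is not None:   # e.g. 'spawning'
            return                            # skip: a racing operation is in flight
        if instance.power_state != vm_power_state:
            instance.power_state = vm_power_state
            instance.save()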
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.969 189489 INFO nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Took 11.94 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:51:26 compute-0 nova_compute[189485]: 2025-11-29 15:51:26.970 189489 DEBUG nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:27 compute-0 nova_compute[189485]: 2025-11-29 15:51:27.030 189489 INFO nova.compute.manager [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Took 12.47 seconds to build instance.#033[00m
Nov 29 15:51:27 compute-0 nova_compute[189485]: 2025-11-29 15:51:27.047 189489 DEBUG oslo_concurrency.lockutils [None req-e112c01c-694f-4433-8d8c-954b37a7a6d8 b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.601s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
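The three durations nest as expected: 11.94 s hypervisor spawn, inside the 12.47 s total build, inside the 12.601 s the build lock was held. Working back from the release timestamp (a quick derived check, not itself in the log):

    from datetime import datetime, timedelta

    released = datetime.fromisoformat('2025-11-29 15:51:27.047')
    acquired = released - timedelta(seconds=12.601)
    print(acquired.time())  # ~15:51:14.446, just after the boot request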
Nov 29 15:51:28 compute-0 nova_compute[189485]: 2025-11-29 15:51:28.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:51:28 compute-0 nova_compute[189485]: 2025-11-29 15:51:28.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 15:51:28 compute-0 podman[252186]: 2025-11-29 15:51:28.635236233 +0000 UTC m=+0.087360140 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 15:51:28 compute-0 nova_compute[189485]: 2025-11-29 15:51:28.888 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:29 compute-0 podman[203677]: time="2025-11-29T15:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:51:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:51:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5249 "" "Go-http-client/1.1"
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.637 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.704 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "7006a15e-c744-447a-8a3f-98ba3a07b080" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.706 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.706 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.707 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.709 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.711 189489 INFO nova.compute.manager [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Terminating instance#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.713 189489 DEBUG nova.compute.manager [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.762 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:30 compute-0 kernel: tap026e3a29-d3 (unregistering): left promiscuous mode
Nov 29 15:51:30 compute-0 NetworkManager[56360]: <info>  [1764431490.7703] device (tap026e3a29-d3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:51:30 compute-0 ovn_controller[97827]: 2025-11-29T15:51:30Z|00109|binding|INFO|Releasing lport 026e3a29-d366-4753-b12d-f2910dbf0922 from this chassis (sb_readonly=0)
Nov 29 15:51:30 compute-0 ovn_controller[97827]: 2025-11-29T15:51:30Z|00110|binding|INFO|Setting lport 026e3a29-d366-4753-b12d-f2910dbf0922 down in Southbound
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.783 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:30 compute-0 ovn_controller[97827]: 2025-11-29T15:51:30Z|00111|binding|INFO|Removing iface tap026e3a29-d3 ovn-installed in OVS
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.789 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:30.790 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:3d:5a:6e 10.100.0.12'], port_security=['fa:16:3e:3d:5a:6e 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '7006a15e-c744-447a-8a3f-98ba3a07b080', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-5e69448a-aa26-4336-ba73-7967d1aa0093', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5a2a25fd5988424f94cde619b09c8f11', 'neutron:revision_number': '4', 'neutron:security_group_ids': '25cd3a8c-300e-4617-b51c-03954717186c', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=b3880ced-0ccf-41f6-8f2c-0d9948b36e6c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=026e3a29-d366-4753-b12d-f2910dbf0922) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:51:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:30.791 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 026e3a29-d366-4753-b12d-f2910dbf0922 in datapath 5e69448a-aa26-4336-ba73-7967d1aa0093 unbound from our chassis#033[00m
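The metadata agent reacted to that Port_Binding update through an ovsdbapp row event: when the row's up/chassis columns change, matches() fires and run() is invoked with the new and old rows. A minimal sketch of such an event class (ovsdbapp API; the handler body is illustrative):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def run(self, event, row, old):
            # Called by the IDL notify loop, e.g. with old.up == [True]
            # and row.up == [False] as in the update matched above.
            print('port', row.logical_port, 'updated')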
Nov 29 15:51:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:30.793 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 5e69448a-aa26-4336-ba73-7967d1aa0093, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:51:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:30.794 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[01e7b9d8-90cf-496b-8d4f-ed763d99320d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:30 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:30.795 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093 namespace which is not needed anymore#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.806 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:30 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Nov 29 15:51:30 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 4.399s CPU time.
Nov 29 15:51:30 compute-0 systemd-machined[155802]: Machine qemu-10-instance-0000000a terminated.
Nov 29 15:51:30 compute-0 podman[252205]: 2025-11-29 15:51:30.913378429 +0000 UTC m=+0.106830073 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, io.buildah.version=1.29.0, vcs-type=git, architecture=x86_64, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, container_name=kepler, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:51:30 compute-0 podman[252209]: 2025-11-29 15:51:30.913570734 +0000 UTC m=+0.111330504 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ceilometer_agent_ipmi)
Nov 29 15:51:30 compute-0 podman[252208]: 2025-11-29 15:51:30.922319209 +0000 UTC m=+0.121623581 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:51:30 compute-0 podman[252211]: 2025-11-29 15:51:30.943419378 +0000 UTC m=+0.114418608 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, container_name=openstack_network_exporter, version=9.6, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 29 15:51:30 compute-0 podman[252210]: 2025-11-29 15:51:30.968164263 +0000 UTC m=+0.151481615 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.997 189489 INFO nova.virt.libvirt.driver [-] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Instance destroyed successfully.#033[00m
Nov 29 15:51:30 compute-0 nova_compute[189485]: 2025-11-29 15:51:30.998 189489 DEBUG nova.objects.instance [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lazy-loading 'resources' on Instance uuid 7006a15e-c744-447a-8a3f-98ba3a07b080 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:31 compute-0 neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093[252171]: [NOTICE]   (252175) : haproxy version is 2.8.14-c23fe91
Nov 29 15:51:31 compute-0 neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093[252171]: [NOTICE]   (252175) : path to executable is /usr/sbin/haproxy
Nov 29 15:51:31 compute-0 neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093[252171]: [WARNING]  (252175) : Exiting Master process...
Nov 29 15:51:31 compute-0 neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093[252171]: [ALERT]    (252175) : Current worker (252177) exited with code 143 (Terminated)
Nov 29 15:51:31 compute-0 neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093[252171]: [WARNING]  (252175) : All workers exited. Exiting... (0)
Nov 29 15:51:31 compute-0 systemd[1]: libpod-42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af.scope: Deactivated successfully.
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.020 189489 DEBUG nova.virt.libvirt.vif [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:51:13Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-130506979',display_name='tempest-ServerAddressesTestJSON-server-130506979',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-130506979',id=10,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:51:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5a2a25fd5988424f94cde619b09c8f11',ramdisk_id='',reservation_id='r-moaj8b6g',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-594409186',owner_user_name='tempest-ServerAddressesTestJSON-594409186-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:51:27Z,user_data=None,user_id='b31d88fdbdd24aa38b065d06114894f7',uuid=7006a15e-c744-447a-8a3f-98ba3a07b080,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.020 189489 DEBUG nova.network.os_vif_util [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Converting VIF {"id": "026e3a29-d366-4753-b12d-f2910dbf0922", "address": "fa:16:3e:3d:5a:6e", "network": {"id": "5e69448a-aa26-4336-ba73-7967d1aa0093", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-611362408-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5a2a25fd5988424f94cde619b09c8f11", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap026e3a29-d3", "ovs_interfaceid": "026e3a29-d366-4753-b12d-f2910dbf0922", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:51:31 compute-0 podman[252324]: 2025-11-29 15:51:31.021151298 +0000 UTC m=+0.064047464 container died 42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.021 189489 DEBUG nova.network.os_vif_util [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5a:6e,bridge_name='br-int',has_traffic_filtering=True,id=026e3a29-d366-4753-b12d-f2910dbf0922,network=Network(5e69448a-aa26-4336-ba73-7967d1aa0093),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap026e3a29-d3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.021 189489 DEBUG os_vif [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5a:6e,bridge_name='br-int',has_traffic_filtering=True,id=026e3a29-d366-4753-b12d-f2910dbf0922,network=Network(5e69448a-aa26-4336-ba73-7967d1aa0093),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap026e3a29-d3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.022 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.023 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap026e3a29-d3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.024 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.029 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.031 189489 INFO os_vif [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:3d:5a:6e,bridge_name='br-int',has_traffic_filtering=True,id=026e3a29-d366-4753-b12d-f2910dbf0922,network=Network(5e69448a-aa26-4336-ba73-7967d1aa0093),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap026e3a29-d3')#033[00m
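The DelPortCommand transaction above is ovsdbapp's del_port primitive, which os-vif drives to detach the tap from br-int. Roughly the same call stands alone as below, assuming ovsdbapp and a local OVS database socket (the socket path is an assumption):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    # Same semantics as the logged command: no-op if the port is gone.
    api.del_port('tap026e3a29-d3', bridge='br-int', if_exists=True).execute(
        check_error=True)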
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.031 189489 INFO nova.virt.libvirt.driver [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Deleting instance files /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080_del#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.032 189489 INFO nova.virt.libvirt.driver [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Deletion of /var/lib/nova/instances/7006a15e-c744-447a-8a3f-98ba3a07b080_del complete#033[00m
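Instance file deletion uses rename-then-remove: the directory is first renamed to "<uuid>_del", so a crash mid-cleanup leaves an obviously orphaned path rather than a half-deleted live directory. Schematically:

    import os
    import shutil

    def delete_instance_files(path):
        target = path + '_del'   # .../7006a15e-..._del as logged above
        os.rename(path, target)
        shutil.rmtree(target, ignore_errors=True)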
Nov 29 15:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af-userdata-shm.mount: Deactivated successfully.
Nov 29 15:51:31 compute-0 systemd[1]: var-lib-containers-storage-overlay-733f5eee5e847190fd0d7ba4a273e8463e0b4dd9dfd380311eb2694e79323c5f-merged.mount: Deactivated successfully.
Nov 29 15:51:31 compute-0 podman[252324]: 2025-11-29 15:51:31.067295759 +0000 UTC m=+0.110191925 container cleanup 42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 15:51:31 compute-0 systemd[1]: libpod-conmon-42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af.scope: Deactivated successfully.
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.100 189489 INFO nova.compute.manager [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.101 189489 DEBUG oslo.service.loopingcall [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
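The "Waiting for function ..." line is oslo.service's looping-call machinery: deallocation is wrapped in a looping call that repeats until the wrapped function signals completion. A simplified sketch of the pattern, assuming oslo.service (nova's real wrapper adds retry limits and error handling):

    from oslo_service import loopingcall

    state = {'tries': 0}

    def _deallocate_with_retries():
        state['tries'] += 1
        if state['tries'] < 3:                  # pretend early tries fail
            return                              # run again next interval
        raise loopingcall.LoopingCallDone()     # success: stop looping

    timer = loopingcall.FixedIntervalLoopingCall(_deallocate_with_retries)
    timer.start(interval=2).wait()              # blocks until LoopingCallDone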
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.102 189489 DEBUG nova.compute.manager [-] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.103 189489 DEBUG nova.network.neutron [-] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.105 189489 DEBUG nova.compute.manager [req-0c75e696-4bb3-4a9a-8220-ff407c882e18 req-f8a4d6a2-4570-46da-8641-8420364058cb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received event network-vif-unplugged-026e3a29-d366-4753-b12d-f2910dbf0922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.105 189489 DEBUG oslo_concurrency.lockutils [req-0c75e696-4bb3-4a9a-8220-ff407c882e18 req-f8a4d6a2-4570-46da-8641-8420364058cb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.106 189489 DEBUG oslo_concurrency.lockutils [req-0c75e696-4bb3-4a9a-8220-ff407c882e18 req-f8a4d6a2-4570-46da-8641-8420364058cb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.106 189489 DEBUG oslo_concurrency.lockutils [req-0c75e696-4bb3-4a9a-8220-ff407c882e18 req-f8a4d6a2-4570-46da-8641-8420364058cb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.106 189489 DEBUG nova.compute.manager [req-0c75e696-4bb3-4a9a-8220-ff407c882e18 req-f8a4d6a2-4570-46da-8641-8420364058cb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] No waiting events found dispatching network-vif-unplugged-026e3a29-d366-4753-b12d-f2910dbf0922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.107 189489 DEBUG nova.compute.manager [req-0c75e696-4bb3-4a9a-8220-ff407c882e18 req-f8a4d6a2-4570-46da-8641-8420364058cb 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received event network-vif-unplugged-026e3a29-d366-4753-b12d-f2910dbf0922 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:51:31 compute-0 podman[252373]: 2025-11-29 15:51:31.145081641 +0000 UTC m=+0.050322444 container remove 42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.156 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[bebf0963-2fb8-46ca-894f-68892e8f9e06]: (4, ('Sat Nov 29 03:51:30 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093 (42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af)\n42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af\nSat Nov 29 03:51:31 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093 (42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af)\n42376cf79e66cbc4c9c2ec564cee75729f4055d117e713d669bcf193ff0f71af\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.159 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[e0466911-c5ea-4b8c-9c18-0d4624271bc3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.161 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap5e69448a-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.163 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:31 compute-0 kernel: tap5e69448a-a0: left promiscuous mode
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.168 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.172 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[074c95a8-1604-4244-b568-16e3ff1c20b5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:31 compute-0 nova_compute[189485]: 2025-11-29 15:51:31.183 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.188 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[f6f5048b-d1d2-4f97-b35b-36de83662018]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.190 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[0ff64ccf-a1ec-4b2f-a053-750d2bcc3e57]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.208 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[9bc427dd-63d4-49f1-b065-ef36e0a089fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 524103, 'reachable_time': 37657, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252388, 'error': None, 'target': 'ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
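That privsep reply is a netlink link dump taken inside the ovnmeta namespace just before it is torn down; only the loopback device remains, confirming the namespace is empty. A minimal equivalent query, assuming pyroute2 and a namespace that still exists:

    from pyroute2 import NetNS

    with NetNS('ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093') as ns:
        for link in ns.get_links():
            # Prints e.g. "lo up" for the dump shown above.
            print(link.get_attr('IFLA_IFNAME'), link['state'])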
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.211 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-5e69448a-aa26-4336-ba73-7967d1aa0093 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 15:51:31 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:31.212 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[2b05b18a-0b71-4563-b467-9607e51ecf67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:31 compute-0 systemd[1]: run-netns-ovnmeta\x2d5e69448a\x2daa26\x2d4336\x2dba73\x2d7967d1aa0093.mount: Deactivated successfully.
Nov 29 15:51:31 compute-0 openstack_network_exporter[205841]: ERROR   15:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:51:31 compute-0 openstack_network_exporter[205841]: ERROR   15:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:51:31 compute-0 openstack_network_exporter[205841]: ERROR   15:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:51:31 compute-0 openstack_network_exporter[205841]: ERROR   15:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:51:31 compute-0 openstack_network_exporter[205841]: ERROR   15:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
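These exporter errors are expected on a compute node: ovn-northd does not run here, the ovsdb-server control socket was apparently not visible to the exporter at that moment, and the dpif-netdev queries only apply to the userspace datapath (this host uses the kernel datapath, per datapath_type=system above). A quick existence check for the control sockets, with paths assumed from the container's /run mounts:

    import glob

    for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
        # appctl-style calls need these *.ctl unix sockets to exist.
        print(pattern, '->', glob.glob(pattern) or 'none')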
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.247 189489 DEBUG nova.network.neutron [-] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.271 189489 INFO nova.compute.manager [-] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Took 1.17 seconds to deallocate network for instance.#033[00m
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.325 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.326 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.486 189489 DEBUG nova.compute.provider_tree [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.515 189489 DEBUG nova.scheduler.client.report [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
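For reference, Placement derives usable capacity from such an inventory as (total - reserved) * allocation_ratio, so the figures above advertise 32 VCPU, 7167 MB of RAM and 70.2 GB of disk; a quick check:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0 / MEMORY_MB 7167.0 / DISK_GB 70.2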
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.547 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.590 189489 INFO nova.scheduler.client.report [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Deleted allocations for instance 7006a15e-c744-447a-8a3f-98ba3a07b080#033[00m
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.670 189489 DEBUG oslo_concurrency.lockutils [None req-45501432-adea-441c-b121-207bc2e2764e b31d88fdbdd24aa38b065d06114894f7 5a2a25fd5988424f94cde619b09c8f11 - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.965s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
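The Acquiring/acquired/released triplets above are oslo.concurrency's lockutils at work: Nova serializes the resource tracker and each instance's lifecycle operations on named locks. A minimal sketch of the same two API forms, with the lock names taken from the log:

    from oslo_concurrency import lockutils

    # Context-manager form, as used for "compute_resources".
    with lockutils.lock("compute_resources"):
        pass  # update resource usage under the lock

    # Decorator form, as used for the per-instance terminate lock.
    @lockutils.synchronized("7006a15e-c744-447a-8a3f-98ba3a07b080")
    def do_terminate_instance():
        pass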
Nov 29 15:51:32 compute-0 ovn_controller[97827]: 2025-11-29T15:51:32Z|00112|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:51:32 compute-0 nova_compute[189485]: 2025-11-29 15:51:32.988 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:33 compute-0 nova_compute[189485]: 2025-11-29 15:51:33.056 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:33 compute-0 nova_compute[189485]: 2025-11-29 15:51:33.245 189489 DEBUG nova.compute.manager [req-823e47cd-a5e4-4aa9-98ac-90aa6cc10a7c req-638fb19f-2f6c-48f2-93cf-6c47ea7a3b34 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received event network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:33 compute-0 nova_compute[189485]: 2025-11-29 15:51:33.245 189489 DEBUG oslo_concurrency.lockutils [req-823e47cd-a5e4-4aa9-98ac-90aa6cc10a7c req-638fb19f-2f6c-48f2-93cf-6c47ea7a3b34 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:33 compute-0 nova_compute[189485]: 2025-11-29 15:51:33.246 189489 DEBUG oslo_concurrency.lockutils [req-823e47cd-a5e4-4aa9-98ac-90aa6cc10a7c req-638fb19f-2f6c-48f2-93cf-6c47ea7a3b34 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:33 compute-0 nova_compute[189485]: 2025-11-29 15:51:33.246 189489 DEBUG oslo_concurrency.lockutils [req-823e47cd-a5e4-4aa9-98ac-90aa6cc10a7c req-638fb19f-2f6c-48f2-93cf-6c47ea7a3b34 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "7006a15e-c744-447a-8a3f-98ba3a07b080-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:33 compute-0 nova_compute[189485]: 2025-11-29 15:51:33.246 189489 DEBUG nova.compute.manager [req-823e47cd-a5e4-4aa9-98ac-90aa6cc10a7c req-638fb19f-2f6c-48f2-93cf-6c47ea7a3b34 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] No waiting events found dispatching network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:51:33 compute-0 nova_compute[189485]: 2025-11-29 15:51:33.247 189489 WARNING nova.compute.manager [req-823e47cd-a5e4-4aa9-98ac-90aa6cc10a7c req-638fb19f-2f6c-48f2-93cf-6c47ea7a3b34 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received unexpected event network-vif-plugged-026e3a29-d366-4753-b12d-f2910dbf0922 for instance with vm_state deleted and task_state None.#033[00m
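The warning is benign here: Neutron delivered network-vif-plugged after the instance had already been deleted, so no waiter was registered for the event. Conceptually, pop_instance_event consults a per-instance registry of pending waiters and reports an arriving event as unexpected when none exists; a simplified illustration of that pattern (not Nova's actual code):

    import threading

    class InstanceEvents:
        """Toy waiter registry: a delivery with no registered waiter
        is reported as unexpected instead of waking anyone up."""
        def __init__(self):
            self._waiters = {}  # (instance_uuid, event_name) -> threading.Event

        def prepare(self, instance_uuid, event_name):
            ev = threading.Event()
            self._waiters[(instance_uuid, event_name)] = ev
            return ev

        def pop_instance_event(self, instance_uuid, event_name):
            ev = self._waiters.pop((instance_uuid, event_name), None)
            if ev is None:
                print("Received unexpected event", event_name)
            else:
                ev.set()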
Nov 29 15:51:33 compute-0 nova_compute[189485]: 2025-11-29 15:51:33.258 189489 DEBUG nova.compute.manager [req-823e47cd-a5e4-4aa9-98ac-90aa6cc10a7c req-638fb19f-2f6c-48f2-93cf-6c47ea7a3b34 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Received event network-vif-deleted-026e3a29-d366-4753-b12d-f2910dbf0922 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:35 compute-0 ovn_controller[97827]: 2025-11-29T15:51:35Z|00113|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:51:35 compute-0 nova_compute[189485]: 2025-11-29 15:51:35.456 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:35 compute-0 nova_compute[189485]: 2025-11-29 15:51:35.583 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431480.5812373, a8fbb028-7553-448d-8ee5-e0b34ade7315 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:51:35 compute-0 nova_compute[189485]: 2025-11-29 15:51:35.583 189489 INFO nova.compute.manager [-] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] VM Stopped (Lifecycle Event)#033[00m
Nov 29 15:51:35 compute-0 nova_compute[189485]: 2025-11-29 15:51:35.611 189489 DEBUG nova.compute.manager [None req-2d760d83-323b-49a9-9ed7-a64693c36c8b - - - - - -] [instance: a8fbb028-7553-448d-8ee5-e0b34ade7315] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:35 compute-0 nova_compute[189485]: 2025-11-29 15:51:35.639 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:35 compute-0 podman[252389]: 2025-11-29 15:51:35.696526565 +0000 UTC m=+0.127976843 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:51:36 compute-0 nova_compute[189485]: 2025-11-29 15:51:36.026 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:36 compute-0 nova_compute[189485]: 2025-11-29 15:51:36.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:51:36 compute-0 nova_compute[189485]: 2025-11-29 15:51:36.993 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:38 compute-0 podman[252406]: 2025-11-29 15:51:38.625835962 +0000 UTC m=+0.071656968 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:51:40 compute-0 ovn_controller[97827]: 2025-11-29T15:51:40Z|00114|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:51:40 compute-0 nova_compute[189485]: 2025-11-29 15:51:40.558 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:40 compute-0 nova_compute[189485]: 2025-11-29 15:51:40.644 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:41 compute-0 nova_compute[189485]: 2025-11-29 15:51:41.031 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:42 compute-0 nova_compute[189485]: 2025-11-29 15:51:42.081 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:43 compute-0 nova_compute[189485]: 2025-11-29 15:51:43.333 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:43 compute-0 nova_compute[189485]: 2025-11-29 15:51:43.334 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:43 compute-0 nova_compute[189485]: 2025-11-29 15:51:43.335 189489 INFO nova.compute.manager [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Rebooting instance#033[00m
Nov 29 15:51:43 compute-0 nova_compute[189485]: 2025-11-29 15:51:43.354 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:51:43 compute-0 nova_compute[189485]: 2025-11-29 15:51:43.355 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquired lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:51:43 compute-0 nova_compute[189485]: 2025-11-29 15:51:43.355 189489 DEBUG nova.network.neutron [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:51:45 compute-0 nova_compute[189485]: 2025-11-29 15:51:45.649 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:45 compute-0 nova_compute[189485]: 2025-11-29 15:51:45.995 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431490.9935956, 7006a15e-c744-447a-8a3f-98ba3a07b080 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:51:45 compute-0 nova_compute[189485]: 2025-11-29 15:51:45.995 189489 INFO nova.compute.manager [-] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] VM Stopped (Lifecycle Event)#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.019 189489 DEBUG nova.compute.manager [None req-da58107f-e66b-43d5-8aa5-72b5938fd80c - - - - - -] [instance: 7006a15e-c744-447a-8a3f-98ba3a07b080] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.034 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.095 189489 DEBUG nova.network.neutron [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updating instance_info_cache with network_info: [{"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
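The cached network_info is plain JSON, and for instance ea685573-5d12-4d41-8c8d-1d73dc63399d it records a single OVS port with fixed IP 10.100.0.11 behind floating IP 192.168.122.245. A short sketch that walks such a blob, pared down to only the fields used here:

    import json

    # Trimmed copy of the cached blob above.
    nw_info_json = '''[{"id": "471b576d-abd9-4813-915c-33fdffb4ae94",
      "network": {"subnets": [{"ips": [{"address": "10.100.0.11",
        "floating_ips": [{"address": "192.168.122.245"}]}]}]}}]'''

    for vif in json.loads(nw_info_json):
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], "fixed:", ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["id"], "floating:", fip["address"])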
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.110 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Releasing lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.112 189489 DEBUG nova.compute.manager [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:46 compute-0 kernel: tap471b576d-ab (unregistering): left promiscuous mode
Nov 29 15:51:46 compute-0 NetworkManager[56360]: <info>  [1764431506.2562] device (tap471b576d-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:51:46 compute-0 ovn_controller[97827]: 2025-11-29T15:51:46Z|00115|binding|INFO|Releasing lport 471b576d-abd9-4813-915c-33fdffb4ae94 from this chassis (sb_readonly=0)
Nov 29 15:51:46 compute-0 ovn_controller[97827]: 2025-11-29T15:51:46Z|00116|binding|INFO|Setting lport 471b576d-abd9-4813-915c-33fdffb4ae94 down in Southbound
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.272 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 ovn_controller[97827]: 2025-11-29T15:51:46Z|00117|binding|INFO|Removing iface tap471b576d-ab ovn-installed in OVS
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.274 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.282 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:50:d3 10.100.0.11'], port_security=['fa:16:3e:b8:50:d3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ea685573-5d12-4d41-8c8d-1d73dc63399d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79e3732a895b43ce86538671ea9e7670', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8e2a464-eef4-4c41-a809-d94caef28d98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02d3693f-5198-43ab-859b-ff500142407c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=471b576d-abd9-4813-915c-33fdffb4ae94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.283 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 471b576d-abd9-4813-915c-33fdffb4ae94 in datapath 29b0dade-4512-451e-9fdc-1b8d13fd5972 unbound from our chassis#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.285 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29b0dade-4512-451e-9fdc-1b8d13fd5972, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.285 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[acfe2ce8-1462-4b14-8fbd-13eebd09dc3d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.286 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 namespace which is not needed anymore#033[00m
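Metadata namespaces are named after the Neutron network they serve, ovnmeta-<network UUID>, which is how the agent maps the unbound datapath 29b0dade-... to the namespace it now tears down:

    def metadata_namespace(network_id):
        # OVN metadata agent convention: "ovnmeta-" + Neutron network UUID.
        return "ovnmeta-" + network_id

    assert (metadata_namespace("29b0dade-4512-451e-9fdc-1b8d13fd5972")
            == "ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972")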
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.307 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 29 15:51:46 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 43.702s CPU time.
Nov 29 15:51:46 compute-0 systemd-machined[155802]: Machine qemu-9-instance-00000009 terminated.
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.411 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[251362]: [NOTICE]   (251367) : haproxy version is 2.8.14-c23fe91
Nov 29 15:51:46 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[251362]: [NOTICE]   (251367) : path to executable is /usr/sbin/haproxy
Nov 29 15:51:46 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[251362]: [WARNING]  (251367) : Exiting Master process...
Nov 29 15:51:46 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[251362]: [WARNING]  (251367) : Exiting Master process...
Nov 29 15:51:46 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[251362]: [ALERT]    (251367) : Current worker (251369) exited with code 143 (Terminated)
Nov 29 15:51:46 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[251362]: [WARNING]  (251367) : All workers exited. Exiting... (0)
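Worker exit code 143 is the conventional encoding of death by signal, 128 plus the signal number, so the haproxy worker was terminated by SIGTERM (15) as part of the container being stopped; it did not crash:

    import signal

    # 128 + SIGTERM(15) == 143: orderly termination, not a failure.
    assert 128 + signal.SIGTERM == 143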
Nov 29 15:51:46 compute-0 systemd[1]: libpod-5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca.scope: Deactivated successfully.
Nov 29 15:51:46 compute-0 podman[252454]: 2025-11-29 15:51:46.450517604 +0000 UTC m=+0.066873030 container died 5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.490 189489 INFO nova.virt.libvirt.driver [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Instance destroyed successfully.#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.491 189489 DEBUG nova.objects.instance [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lazy-loading 'resources' on Instance uuid ea685573-5d12-4d41-8c8d-1d73dc63399d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca-userdata-shm.mount: Deactivated successfully.
Nov 29 15:51:46 compute-0 systemd[1]: var-lib-containers-storage-overlay-57cd72c40a2623cd9019fa8c8e3bb08afffd1707aead34290bcca445d1a5d026-merged.mount: Deactivated successfully.
Nov 29 15:51:46 compute-0 podman[252454]: 2025-11-29 15:51:46.504888386 +0000 UTC m=+0.121243812 container cleanup 5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2)
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.506 189489 DEBUG nova.virt.libvirt.vif [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:50:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-153023418',display_name='tempest-ServerActionsTestJSON-server-153023418',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-153023418',id=9,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHe84/Vw1/UE6MjH9hSoZ8S+lF+m9Cdu9Av7vTw88OmQpmBt5taKTJ/r+cWSkzwOPRZEvDuFb+SsqaHgLTHP3NrHdnllgdosFCEIeqEnWDvyEA3QKG1liQQzPUp2/9l1bw==',key_name='tempest-keypair-106632266',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:50:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79e3732a895b43ce86538671ea9e7670',ramdisk_id='',reservation_id='r-7ix6aam2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1517137287',owner_user_name='tempest-ServerActionsTestJSON-1517137287-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:51:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b595faab5dfa4b4e9aff6a34b1473172',uuid=ea685573-5d12-4d41-8c8d-1d73dc63399d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.507 189489 DEBUG nova.network.os_vif_util [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converting VIF {"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.510 189489 DEBUG nova.network.os_vif_util [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.511 189489 DEBUG os_vif [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.513 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.514 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap471b576d-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.517 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.519 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:51:46 compute-0 systemd[1]: libpod-conmon-5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca.scope: Deactivated successfully.
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.521 189489 INFO os_vif [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab')#033[00m
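The DelPortCommand transaction above is os-vif removing the tap device from br-int through ovsdbapp. A sketch of the equivalent standalone call, assuming a local ovsdb-server at the default socket; it mirrors, rather than reproduces, what os-vif does internally:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Assumption: default OVS database socket; needs a running ovsdb-server.
    idl = connection.OvsdbIdl.from_server("unix:/var/run/openvswitch/db.sock",
                                          "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=5))
    # Same semantics as the logged command: delete the port if it exists.
    api.del_port("tap471b576d-ab", bridge="br-int", if_exists=True).execute(
        check_error=True)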
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.529 189489 DEBUG nova.virt.libvirt.driver [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Start _get_guest_xml network_info=[{"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.534 189489 WARNING nova.virt.libvirt.driver [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.546 189489 DEBUG nova.virt.libvirt.host [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.547 189489 DEBUG nova.virt.libvirt.host [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.552 189489 DEBUG nova.virt.libvirt.host [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.553 189489 DEBUG nova.virt.libvirt.host [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
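The two probes reflect the libvirt driver's cgroup detection: on cgroups v1 the cpu controller would be its own mounted hierarchy, while on v2 (as found here) it is advertised in the unified hierarchy's cgroup.controllers file. A quick check, assuming the v2 mount at /sys/fs/cgroup:

    with open("/sys/fs/cgroup/cgroup.controllers") as f:
        controllers = f.read().split()
    print("cpu controller present:", "cpu" in controllers)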
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.553 189489 DEBUG nova.virt.libvirt.driver [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.554 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.554 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.555 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.555 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.555 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.556 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.556 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.556 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.557 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.557 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.557 189489 DEBUG nova.virt.hardware [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
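With no flavor or image topology constraints (all limits 0, so the 65536 defaults apply), any factorization sockets * cores * threads == vcpus is a candidate; for a 1-vCPU flavor that leaves exactly 1:1:1, matching the log. A brute-force sketch of the enumeration:

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate (sockets, cores, threads) whose product is vcpus.
        return [(s, c, t)
                for s in range(1, min(vcpus, max_sockets) + 1)
                for c in range(1, min(vcpus, max_cores) + 1)
                for t in range(1, min(vcpus, max_threads) + 1)
                if s * c * t == vcpus]

    print(possible_topologies(1))  # [(1, 1, 1)]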
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.558 189489 DEBUG nova.objects.instance [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lazy-loading 'vcpu_model' on Instance uuid ea685573-5d12-4d41-8c8d-1d73dc63399d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.581 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:46 compute-0 podman[252501]: 2025-11-29 15:51:46.601204506 +0000 UTC m=+0.062140252 container remove 5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.614 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[5f2145cc-6175-46e6-8b48-09002af700bb]: (4, ('Sat Nov 29 03:51:46 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 (5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca)\n5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca\nSat Nov 29 03:51:46 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 (5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca)\n5dcf23fb7f05ef325972c5f370682f1e2e80ed5561fd6d12551449e6ccadcdca\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.615 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[d6ab1bcc-1d6f-4d78-bb83-f2f0cb4bd47b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.616 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29b0dade-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.618 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 kernel: tap29b0dade-40: left promiscuous mode
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.634 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.637 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[54ec5b0f-17f2-4ff5-81f4-3d273e9e2d58]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.656 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[bd409df1-bf2f-428d-9a0b-a9bd8431f145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.657 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[53df4cec-8156-4738-b5a0-7c9662a139f2]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.667 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.config --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
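The qemu-img probe runs under oslo.concurrency's prlimit wrapper, which caps the child's address space at 1 GiB and CPU time at 30 seconds so a malformed image cannot wedge the compute service. The same invocation from Python, mirroring the logged command line:

    import subprocess

    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info",
           "/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.config",
           "--force-share", "--output=json"]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    print(result.stdout)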
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.669 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.670 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.671 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[bb829abe-7570-4f7e-a040-7c93793c0905]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 517548, 'reachable_time': 23015, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252517, 'error': None, 'target': 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.671 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.673 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 15:51:46 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:46.673 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[744c455f-3e4d-418b-a797-7191e324dfd6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:46 compute-0 systemd[1]: run-netns-ovnmeta\x2d29b0dade\x2d4512\x2d451e\x2d9fdc\x2d1b8d13fd5972.mount: Deactivated successfully.
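
Here the agent tears down the metadata namespace before re-provisioning it; removing /run/netns/<name> is what makes systemd log the corresponding mount unit as deactivated. A sketch of the teardown with pyroute2, with the name taken from the log:

    from pyroute2 import netns

    NS = 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972'
    # Hypothetical sketch: unlinking the namespace drops its bind mount under
    # /run/netns, which systemd reports as "Deactivated successfully".
    if NS in netns.listnetns():
        netns.remove(NS)
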
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.674 189489 DEBUG nova.virt.libvirt.vif [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:50:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-153023418',display_name='tempest-ServerActionsTestJSON-server-153023418',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-153023418',id=9,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHe84/Vw1/UE6MjH9hSoZ8S+lF+m9Cdu9Av7vTw88OmQpmBt5taKTJ/r+cWSkzwOPRZEvDuFb+SsqaHgLTHP3NrHdnllgdosFCEIeqEnWDvyEA3QKG1liQQzPUp2/9l1bw==',key_name='tempest-keypair-106632266',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:50:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79e3732a895b43ce86538671ea9e7670',ramdisk_id='',reservation_id='r-7ix6aam2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1517137287',owner_user_name='tempest-ServerActionsTestJSON-1517137287-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:51:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b595faab5dfa4b4e9aff6a34b1473172',uuid=ea685573-5d12-4d41-8c8d-1d73dc63399d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.676 189489 DEBUG nova.network.os_vif_util [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converting VIF {"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.678 189489 DEBUG nova.network.os_vif_util [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.679 189489 DEBUG nova.objects.instance [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lazy-loading 'pci_devices' on Instance uuid ea685573-5d12-4d41-8c8d-1d73dc63399d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.701 189489 DEBUG nova.virt.libvirt.driver [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <uuid>ea685573-5d12-4d41-8c8d-1d73dc63399d</uuid>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <name>instance-00000009</name>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <nova:name>tempest-ServerActionsTestJSON-server-153023418</nova:name>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:51:46</nova:creationTime>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:51:46 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:51:46 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:51:46 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:51:46 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:51:46 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:51:46 compute-0 nova_compute[189485]:        <nova:user uuid="b595faab5dfa4b4e9aff6a34b1473172">tempest-ServerActionsTestJSON-1517137287-project-member</nova:user>
Nov 29 15:51:46 compute-0 nova_compute[189485]:        <nova:project uuid="79e3732a895b43ce86538671ea9e7670">tempest-ServerActionsTestJSON-1517137287</nova:project>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:51:46 compute-0 nova_compute[189485]:        <nova:port uuid="471b576d-abd9-4813-915c-33fdffb4ae94">
Nov 29 15:51:46 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <system>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <entry name="serial">ea685573-5d12-4d41-8c8d-1d73dc63399d</entry>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <entry name="uuid">ea685573-5d12-4d41-8c8d-1d73dc63399d</entry>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </system>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <os>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  </os>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <features>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  </features>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.config"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:b8:50:d3"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <target dev="tap471b576d-ab"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/console.log" append="off"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <video>
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </video>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <input type="keyboard" bus="usb"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:51:46 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:51:46 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:51:46 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:51:46 compute-0 nova_compute[189485]: </domain>
Nov 29 15:51:46 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
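
The XML dump above is what nova hands to libvirt to rebuild the guest for the hard reboot ("Started Virtual Machine qemu-11-instance-00000009" a few entries below). A minimal sketch of that hand-off with the libvirt-python bindings; nova's driver wraps this in far more error handling, so this only shows the define-then-start shape:

    import libvirt

    # Hypothetical sketch: define a persistent domain from an XML document
    # like the one above, then boot it.
    with open('domain.xml') as f:
        xml = f.read()
    conn = libvirt.open('qemu:///system')
    try:
        dom = conn.defineXML(xml)
        dom.create()
    finally:
        conn.close()
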
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.703 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.765 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.767 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.830 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
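
The command pairs above show nova probing the qcow2 with qemu-img under oslo.concurrency's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s so a malformed image cannot wedge the probe. The same call from Python, as a sketch with arguments copied from the log:

    from oslo_concurrency import processutils

    # Hypothetical sketch: processutils builds exactly the
    # "python3 -m oslo_concurrency.prlimit --as=... --cpu=... --" wrapper
    # seen in the logged command line.
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk',
        '--force-share', '--output=json',
        prlimit=processutils.ProcessLimits(address_space=1073741824,  # --as
                                           cpu_time=30))              # --cpu
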
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.832 189489 DEBUG nova.objects.instance [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lazy-loading 'trusted_certs' on Instance uuid ea685573-5d12-4d41-8c8d-1d73dc63399d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.853 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.942 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.944 189489 DEBUG nova.virt.disk.api [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Checking if we can resize image /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:51:46 compute-0 nova_compute[189485]: 2025-11-29 15:51:46.945 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.008 189489 DEBUG oslo_concurrency.processutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.010 189489 DEBUG nova.virt.disk.api [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Cannot resize image /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
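
"Cannot resize image ... to a smaller size" is the expected outcome here: the flavor's 1 GiB root disk matches the image's current virtual size, and nova only ever grows images. A sketch of that guard using qemu-img's JSON output; the helper below is an illustration of the check, not nova's code:

    import json
    import subprocess

    # Hypothetical sketch of the shrink guard: resize only when the requested
    # size strictly exceeds the current virtual size; equal size (as in this
    # log) needs no resize and logs the message above.
    def can_resize_image(path, new_size_bytes):
        info = json.loads(subprocess.check_output(
            ['qemu-img', 'info', '--force-share', '--output=json', path]))
        return info['virtual-size'] < new_size_bytes
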
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.011 189489 DEBUG nova.objects.instance [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lazy-loading 'migration_context' on Instance uuid ea685573-5d12-4d41-8c8d-1d73dc63399d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.029 189489 DEBUG nova.virt.libvirt.vif [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:50:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-153023418',display_name='tempest-ServerActionsTestJSON-server-153023418',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-153023418',id=9,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHe84/Vw1/UE6MjH9hSoZ8S+lF+m9Cdu9Av7vTw88OmQpmBt5taKTJ/r+cWSkzwOPRZEvDuFb+SsqaHgLTHP3NrHdnllgdosFCEIeqEnWDvyEA3QKG1liQQzPUp2/9l1bw==',key_name='tempest-keypair-106632266',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:50:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='79e3732a895b43ce86538671ea9e7670',ramdisk_id='',reservation_id='r-7ix6aam2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1517137287',owner_user_name='tempest-ServerActionsTestJSON-1517137287-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:51:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b595faab5dfa4b4e9aff6a34b1473172',uuid=ea685573-5d12-4d41-8c8d-1d73dc63399d,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": 
{"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.030 189489 DEBUG nova.network.os_vif_util [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converting VIF {"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.032 189489 DEBUG nova.network.os_vif_util [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.033 189489 DEBUG os_vif [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.035 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.036 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.038 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.043 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.044 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap471b576d-ab, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.045 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap471b576d-ab, col_values=(('external_ids', {'iface-id': '471b576d-abd9-4813-915c-33fdffb4ae94', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:b8:50:d3', 'vm-uuid': 'ea685573-5d12-4d41-8c8d-1d73dc63399d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
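
The "Running txn" lines come from ovsdbapp committing an Open_vSwitch transaction: add the tap port to br-int (idempotently) and stamp the Interface row with the neutron port ID plus the instance's MAC and UUID. A rough standalone sketch following ovsdbapp's documented connection pattern; the socket path and timeout are assumptions:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    # Hypothetical sketch: the same two commands as the logged transaction.
    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap471b576d-ab', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap471b576d-ab',
            ('external_ids',
             {'iface-id': '471b576d-abd9-4813-915c-33fdffb4ae94',
              'attached-mac': 'fa:16:3e:b8:50:d3',
              'vm-uuid': 'ea685573-5d12-4d41-8c8d-1d73dc63399d'})))
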
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.047 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 NetworkManager[56360]: <info>  [1764431507.0511] manager: (tap471b576d-ab): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/54)
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.052 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.053 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.055 189489 INFO os_vif [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab')#033[00m
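
"Successfully plugged vif" closes the plug path that started with the VIF conversion above; the public os-vif surface for the whole sequence is small. A self-contained sketch, with field values copied from the log and the caveat that it assumes the 'ovs' plugin is installed and ovsdb is reachable:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # load plugins; the 'ovs' plugin handles this VIF

    # Hypothetical sketch: rebuild the VIFOpenVSwitch object printed by
    # nova_to_osvif_vif above, then plug it.
    osvif = vif.VIFOpenVSwitch(
        id='471b576d-abd9-4813-915c-33fdffb4ae94',
        address='fa:16:3e:b8:50:d3',
        network=network.Network(id='29b0dade-4512-451e-9fdc-1b8d13fd5972'),
        vif_name='tap471b576d-ab',
        bridge_name='br-int',
        has_traffic_filtering=True,
        preserve_on_delete=False,
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id='471b576d-abd9-4813-915c-33fdffb4ae94'))
    info = instance_info.InstanceInfo(
        uuid='ea685573-5d12-4d41-8c8d-1d73dc63399d',
        name='instance-00000009')
    os_vif.plug(osvif, info)  # re-adding an existing port is a no-op
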
Nov 29 15:51:47 compute-0 kernel: tap471b576d-ab: entered promiscuous mode
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.166 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 ovn_controller[97827]: 2025-11-29T15:51:47Z|00118|binding|INFO|Claiming lport 471b576d-abd9-4813-915c-33fdffb4ae94 for this chassis.
Nov 29 15:51:47 compute-0 NetworkManager[56360]: <info>  [1764431507.1691] manager: (tap471b576d-ab): new Tun device (/org/freedesktop/NetworkManager/Devices/55)
Nov 29 15:51:47 compute-0 ovn_controller[97827]: 2025-11-29T15:51:47Z|00119|binding|INFO|471b576d-abd9-4813-915c-33fdffb4ae94: Claiming fa:16:3e:b8:50:d3 10.100.0.11
Nov 29 15:51:47 compute-0 systemd-udevd[252434]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.175 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:50:d3 10.100.0.11'], port_security=['fa:16:3e:b8:50:d3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ea685573-5d12-4d41-8c8d-1d73dc63399d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79e3732a895b43ce86538671ea9e7670', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd8e2a464-eef4-4c41-a809-d94caef28d98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02d3693f-5198-43ab-859b-ff500142407c, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=471b576d-abd9-4813-915c-33fdffb4ae94) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.176 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 471b576d-abd9-4813-915c-33fdffb4ae94 in datapath 29b0dade-4512-451e-9fdc-1b8d13fd5972 bound to our chassis#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.177 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 29b0dade-4512-451e-9fdc-1b8d13fd5972#033[00m
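
"bound to our chassis" is the metadata agent reacting to a Port_Binding update it subscribed to in the OVN southbound DB, matched by the PortBindingUpdatedEvent logged just above. The shape of such a subscription with ovsdbapp's row events, sketched with simplified matching (neutron's real event also filters on datapath, port type, and the requested chassis):

    from ovsdbapp.backend.ovs_idl import event as row_event

    # Hypothetical sketch of a Port_Binding watcher like the one matched in
    # the log above; `agent` stands in for the metadata agent object.
    class PortBoundToChassisEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # Fire only when the binding just gained a chassis,
            # mirroring old=Port_Binding(chassis=[]) in the matched event.
            return bool(row.chassis) and not getattr(old, 'chassis', None)

        def run(self, event, row, old):
            # e.g. provision the ovnmeta- namespace for row.datapath
            self.agent.provision_datapath(row)
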
Nov 29 15:51:47 compute-0 NetworkManager[56360]: <info>  [1764431507.1823] device (tap471b576d-ab): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:51:47 compute-0 NetworkManager[56360]: <info>  [1764431507.1829] device (tap471b576d-ab): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.194 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[28316cd1-3d7f-462f-8410-338b50a8b780]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.195 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap29b0dade-41 in ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.197 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap29b0dade-40 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.197 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[e76f7bdd-7ab1-490a-ba02-674a87b92134]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.198 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[2da349ae-5fd7-4ea4-8bc9-2c0c25813cf6]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
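
"Creating VETH tap29b0dade-41 in ovnmeta-..." is the half of the metadata plumbing that lives inside the namespace; the tap29b0dade-40 end stays in the root namespace and is plugged into br-int a few entries below. A pyroute2 sketch of the pair creation, with names from the log:

    from pyroute2 import IPRoute, NetNS

    NS = 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972'

    # Hypothetical sketch: create the veth pair with one end pushed straight
    # into the metadata namespace, then bring both ends up.
    with IPRoute() as ipr:
        ipr.link('add', ifname='tap29b0dade-40', kind='veth',
                 peer={'ifname': 'tap29b0dade-41', 'net_ns_fd': NS})
        ipr.link('set', index=ipr.link_lookup(ifname='tap29b0dade-40')[0],
                 state='up')
    with NetNS(NS) as ns:
        ns.link('set', index=ns.link_lookup(ifname='tap29b0dade-41')[0],
                state='up')
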
Nov 29 15:51:47 compute-0 ovn_controller[97827]: 2025-11-29T15:51:47Z|00120|binding|INFO|Setting lport 471b576d-abd9-4813-915c-33fdffb4ae94 ovn-installed in OVS
Nov 29 15:51:47 compute-0 ovn_controller[97827]: 2025-11-29T15:51:47Z|00121|binding|INFO|Setting lport 471b576d-abd9-4813-915c-33fdffb4ae94 up in Southbound
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.202 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.208 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[8fc04916-c88e-40f6-b606-1d32a12796a8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 systemd-machined[155802]: New machine qemu-11-instance-00000009.
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.244 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a45332f5-e3e0-470b-9f84-6d05ec40ecd9]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-00000009.
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.291 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[be439be4-06d3-4390-a5e4-a8bd12da86b7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 NetworkManager[56360]: <info>  [1764431507.2990] manager: (tap29b0dade-40): new Veth device (/org/freedesktop/NetworkManager/Devices/56)
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.297 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6e6fee0e-82b2-41a0-ad01-ff08fdc6f929]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.333 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[94c57952-2a25-4130-be4d-7913d625405e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.336 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[a8c57b94-db58-444f-8ca1-5e526facc7d7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 NetworkManager[56360]: <info>  [1764431507.3628] device (tap29b0dade-40): carrier: link connected
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.367 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[f90e12f5-7f09-4d9a-885c-deec100c8f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.385 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[7d496339-a6a2-4f48-8af5-8b3adf8da462]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29b0dade-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:85:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526301, 'reachable_time': 19323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252579, 'error': None, 'target': 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.402 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[75fcc7b7-9135-4b97-b1e1-a18495034ffb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec1:85c8'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 526301, 'tstamp': 526301}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252580, 'error': None, 'target': 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.419 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[76e60b53-57b0-4c12-a3f4-43f6ac13d6fd]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap29b0dade-41'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c1:85:c8'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526301, 'reachable_time': 19323, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252581, 'error': None, 'target': 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.449 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6834f23d-ba0b-4f5d-8cdf-394b4b845860]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.510 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6453661a-f388-4114-ac86-6381c246dd4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.511 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29b0dade-40, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.511 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.511 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap29b0dade-40, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:47 compute-0 kernel: tap29b0dade-40: entered promiscuous mode
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.514 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 NetworkManager[56360]: <info>  [1764431507.5151] manager: (tap29b0dade-40): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/57)
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.516 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.516 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap29b0dade-40, col_values=(('external_ids', {'iface-id': '0c9e125e-3b1f-4aef-b336-cdad32359771'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:47 compute-0 ovn_controller[97827]: 2025-11-29T15:51:47Z|00122|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.521 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.521 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/29b0dade-4512-451e-9fdc-1b8d13fd5972.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/29b0dade-4512-451e-9fdc-1b8d13fd5972.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.522 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[9e206709-9ac6-4d0a-8a00-c9b66f7b5269]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.523 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-29b0dade-4512-451e-9fdc-1b8d13fd5972
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/29b0dade-4512-451e-9fdc-1b8d13fd5972.pid.haproxy
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID 29b0dade-4512-451e-9fdc-1b8d13fd5972
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Nov 29 15:51:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:47.524 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'env', 'PROCESS_TAG=haproxy-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/29b0dade-4512-451e-9fdc-1b8d13fd5972.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.542 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.659 189489 DEBUG nova.virt.libvirt.host [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Removed pending event for ea685573-5d12-4d41-8c8d-1d73dc63399d due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.660 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431507.6587336, ea685573-5d12-4d41-8c8d-1d73dc63399d => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.660 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] VM Resumed (Lifecycle Event)
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.663 189489 DEBUG nova.compute.manager [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.668 189489 INFO nova.virt.libvirt.driver [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Instance rebooted successfully.
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.669 189489 DEBUG nova.compute.manager [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.695 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.700 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.729 189489 DEBUG oslo_concurrency.lockutils [None req-5e99031d-286f-4f33-a1b3-d8c7575406c5 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 4.395s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.731 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.731 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431507.6630292, ea685573-5d12-4d41-8c8d-1d73dc63399d => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.731 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] VM Started (Lifecycle Event)
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.750 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:51:47 compute-0 nova_compute[189485]: 2025-11-29 15:51:47.756 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: None, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:51:48 compute-0 podman[252617]: 2025-11-29 15:51:48.024538474 +0000 UTC m=+0.069072208 container create 87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Nov 29 15:51:48 compute-0 podman[252617]: 2025-11-29 15:51:47.98757187 +0000 UTC m=+0.032105324 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:51:48 compute-0 systemd[1]: Started libpod-conmon-87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097.scope.
Nov 29 15:51:48 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:51:48 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/25901e556368186a7ec910056ed497531bda0e2d0a7263f8af7701fe8ba9a24b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:51:48 compute-0 podman[252617]: 2025-11-29 15:51:48.20809721 +0000 UTC m=+0.252630704 container init 87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 15:51:48 compute-0 podman[252617]: 2025-11-29 15:51:48.217793981 +0000 UTC m=+0.262327445 container start 87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:51:48 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[252631]: [NOTICE]   (252635) : New worker (252637) forked
Nov 29 15:51:48 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[252631]: [NOTICE]   (252635) : Loading success.
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.008 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.008 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.024 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.093 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.094 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.101 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.101 189489 INFO nova.compute.claims [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Claim successful on node compute-0.ctlplane.example.com
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.252 189489 DEBUG nova.compute.provider_tree [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.273 189489 DEBUG nova.scheduler.client.report [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.298 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.299 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.347 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.348 189489 DEBUG nova.network.neutron [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.370 189489 INFO nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 15:51:49 compute-0 ovn_controller[97827]: 2025-11-29T15:51:49Z|00123|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.390 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.472 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.501 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.503 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.504 189489 INFO nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Creating image(s)
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.505 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.505 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.506 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.507 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "bc62df192b9cc3765848644231821ffd9bd86fa9" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:49 compute-0 nova_compute[189485]: 2025-11-29 15:51:49.508 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "bc62df192b9cc3765848644231821ffd9bd86fa9" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:50 compute-0 nova_compute[189485]: 2025-11-29 15:51:50.174 189489 DEBUG nova.policy [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '997fde32c4f7472e87493536b60e7b64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Nov 29 15:51:50 compute-0 nova_compute[189485]: 2025-11-29 15:51:50.625 189489 DEBUG nova.compute.manager [req-da96588f-309d-4074-84dc-2494a31125d7 req-d4fa8132-fb77-40fb-9a12-247f5b88167c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-unplugged-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:50 compute-0 nova_compute[189485]: 2025-11-29 15:51:50.626 189489 DEBUG oslo_concurrency.lockutils [req-da96588f-309d-4074-84dc-2494a31125d7 req-d4fa8132-fb77-40fb-9a12-247f5b88167c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:50 compute-0 nova_compute[189485]: 2025-11-29 15:51:50.626 189489 DEBUG oslo_concurrency.lockutils [req-da96588f-309d-4074-84dc-2494a31125d7 req-d4fa8132-fb77-40fb-9a12-247f5b88167c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:50 compute-0 nova_compute[189485]: 2025-11-29 15:51:50.626 189489 DEBUG oslo_concurrency.lockutils [req-da96588f-309d-4074-84dc-2494a31125d7 req-d4fa8132-fb77-40fb-9a12-247f5b88167c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:50 compute-0 nova_compute[189485]: 2025-11-29 15:51:50.626 189489 DEBUG nova.compute.manager [req-da96588f-309d-4074-84dc-2494a31125d7 req-d4fa8132-fb77-40fb-9a12-247f5b88167c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] No waiting events found dispatching network-vif-unplugged-471b576d-abd9-4813-915c-33fdffb4ae94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:51:50 compute-0 nova_compute[189485]: 2025-11-29 15:51:50.627 189489 WARNING nova.compute.manager [req-da96588f-309d-4074-84dc-2494a31125d7 req-d4fa8132-fb77-40fb-9a12-247f5b88167c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received unexpected event network-vif-unplugged-471b576d-abd9-4813-915c-33fdffb4ae94 for instance with vm_state active and task_state None.
Nov 29 15:51:50 compute-0 nova_compute[189485]: 2025-11-29 15:51:50.650 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:50 compute-0 podman[252647]: 2025-11-29 15:51:50.653463695 +0000 UTC m=+0.104687387 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:51:51 compute-0 nova_compute[189485]: 2025-11-29 15:51:51.710 189489 DEBUG nova.network.neutron [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Successfully created port: 28ff21af-c272-489e-85c2-27ab6ad320db _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Nov 29 15:51:52 compute-0 nova_compute[189485]: 2025-11-29 15:51:52.048 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:52 compute-0 nova_compute[189485]: 2025-11-29 15:51:52.662 189489 DEBUG nova.network.neutron [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Successfully updated port: 28ff21af-c272-489e-85c2-27ab6ad320db _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Nov 29 15:51:52 compute-0 nova_compute[189485]: 2025-11-29 15:51:52.681 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:51:52 compute-0 nova_compute[189485]: 2025-11-29 15:51:52.683 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquired lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:51:52 compute-0 nova_compute[189485]: 2025-11-29 15:51:52.684 189489 DEBUG nova.network.neutron [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Nov 29 15:51:52 compute-0 nova_compute[189485]: 2025-11-29 15:51:52.994 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.023 189489 DEBUG nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.025 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.026 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.026 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.027 189489 DEBUG nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] No waiting events found dispatching network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.028 189489 WARNING nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received unexpected event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 for instance with vm_state active and task_state None.
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.028 189489 DEBUG nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.029 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.030 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.030 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.031 189489 DEBUG nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] No waiting events found dispatching network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.032 189489 WARNING nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received unexpected event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 for instance with vm_state active and task_state None.
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.033 189489 DEBUG nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.033 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.034 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.035 189489 DEBUG oslo_concurrency.lockutils [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.036 189489 DEBUG nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] No waiting events found dispatching network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.037 189489 WARNING nova.compute.manager [req-e8895678-ec85-46af-a2e4-27f676e2e77a req-336608a8-5815-4b13-9313-60380aed800d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received unexpected event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 for instance with vm_state active and task_state None.
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.075 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9.part --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.076 189489 DEBUG nova.virt.images [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] 276c0a04-08bd-40bb-ad7b-a0be69fa4466 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.078 189489 DEBUG nova.privsep.utils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.079 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9.part /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.127 189489 DEBUG nova.network.neutron [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.404 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9.part /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9.converted" returned: 0 in 0.325s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.408 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.503 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9.converted --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.505 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "bc62df192b9cc3765848644231821ffd9bd86fa9" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 3.997s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.521 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.579 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.580 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "bc62df192b9cc3765848644231821ffd9bd86fa9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.581 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "bc62df192b9cc3765848644231821ffd9bd86fa9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.592 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.681 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.683 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9,backing_fmt=raw /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.720 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9,backing_fmt=raw /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk 1073741824" returned: 0 in 0.037s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.721 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "bc62df192b9cc3765848644231821ffd9bd86fa9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.140s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.722 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.783 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.784 189489 DEBUG nova.virt.disk.api [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Checking if we can resize image /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.785 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.843 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.844 189489 DEBUG nova.virt.disk.api [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Cannot resize image /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.844 189489 DEBUG nova.objects.instance [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lazy-loading 'migration_context' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.864 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.864 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Ensure instance console log exists: /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.865 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.866 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:53 compute-0 nova_compute[189485]: 2025-11-29 15:51:53.867 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.767 189489 DEBUG nova.network.neutron [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.789 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Releasing lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.790 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Instance network_info: |[{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.793 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Start _get_guest_xml network_info=[{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:51:36Z,direct_url=<?>,disk_format='qcow2',id=276c0a04-08bd-40bb-ad7b-a0be69fa4466,min_disk=0,min_ram=0,name='tempest-scenario-img--1468111566',owner='cb266773cd4c4eb0904e7249f2b6cb92',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:51:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.805 189489 WARNING nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.813 189489 DEBUG nova.virt.libvirt.host [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.814 189489 DEBUG nova.virt.libvirt.host [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.819 189489 DEBUG nova.virt.libvirt.host [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.820 189489 DEBUG nova.virt.libvirt.host [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.820 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.821 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:51:36Z,direct_url=<?>,disk_format='qcow2',id=276c0a04-08bd-40bb-ad7b-a0be69fa4466,min_disk=0,min_ram=0,name='tempest-scenario-img--1468111566',owner='cb266773cd4c4eb0904e7249f2b6cb92',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:51:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.822 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.822 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.823 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.823 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.824 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.824 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.825 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.826 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.826 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.827 189489 DEBUG nova.virt.hardware [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.831 189489 DEBUG nova.virt.libvirt.vif [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl',id=11,image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='4838e190-17b5-46fc-b5c5-64e289c1eccb'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb266773cd4c4eb0904e7249f2b6cb92',ramdisk_id='',reservation_id='r-ljx3hz30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-739897620',owner_user_name='tempest-PrometheusGabbiTest-739897620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:51:49Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='997fde32c4f7472e87493536b60e7b64',uuid=2c879d1e-7499-4665-8880-438b30ff9d86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.831 189489 DEBUG nova.network.os_vif_util [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converting VIF {"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.832 189489 DEBUG nova.network.os_vif_util [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:93:16,bridge_name='br-int',has_traffic_filtering=True,id=28ff21af-c272-489e-85c2-27ab6ad320db,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ff21af-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.834 189489 DEBUG nova.objects.instance [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lazy-loading 'pci_devices' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.860 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <uuid>2c879d1e-7499-4665-8880-438b30ff9d86</uuid>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <name>instance-0000000b</name>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <nova:name>te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl</nova:name>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:51:54</nova:creationTime>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:51:54 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:51:54 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:51:54 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:51:54 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:51:54 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:51:54 compute-0 nova_compute[189485]:        <nova:user uuid="997fde32c4f7472e87493536b60e7b64">tempest-PrometheusGabbiTest-739897620-project-member</nova:user>
Nov 29 15:51:54 compute-0 nova_compute[189485]:        <nova:project uuid="cb266773cd4c4eb0904e7249f2b6cb92">tempest-PrometheusGabbiTest-739897620</nova:project>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="276c0a04-08bd-40bb-ad7b-a0be69fa4466"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:51:54 compute-0 nova_compute[189485]:        <nova:port uuid="28ff21af-c272-489e-85c2-27ab6ad320db">
Nov 29 15:51:54 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.3.44" ipVersion="4"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <system>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <entry name="serial">2c879d1e-7499-4665-8880-438b30ff9d86</entry>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <entry name="uuid">2c879d1e-7499-4665-8880-438b30ff9d86</entry>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </system>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <os>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  </os>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <features>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  </features>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk.config"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:82:93:16"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <target dev="tap28ff21af-c2"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/console.log" append="off"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <video>
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </video>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:51:54 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:51:54 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:51:54 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:51:54 compute-0 nova_compute[189485]: </domain>
Nov 29 15:51:54 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.862 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Preparing to wait for external event network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.862 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.863 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.864 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.865 189489 DEBUG nova.virt.libvirt.vif [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl',id=11,image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='4838e190-17b5-46fc-b5c5-64e289c1eccb'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb266773cd4c4eb0904e7249f2b6cb92',ramdisk_id='',reservation_id='r-ljx3hz30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-739897620',owner_user_name='tempest-PrometheusGabbiTest-739897620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:51:49Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='997fde32c4f7472e87493536b60e7b64',uuid=2c879d1e-7499-4665-8880-438b30ff9d86,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.865 189489 DEBUG nova.network.os_vif_util [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converting VIF {"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.866 189489 DEBUG nova.network.os_vif_util [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:82:93:16,bridge_name='br-int',has_traffic_filtering=True,id=28ff21af-c272-489e-85c2-27ab6ad320db,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ff21af-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.867 189489 DEBUG os_vif [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:93:16,bridge_name='br-int',has_traffic_filtering=True,id=28ff21af-c272-489e-85c2-27ab6ad320db,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ff21af-c2') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.868 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.869 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.870 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.874 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.874 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap28ff21af-c2, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.875 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap28ff21af-c2, col_values=(('external_ids', {'iface-id': '28ff21af-c272-489e-85c2-27ab6ad320db', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:82:93:16', 'vm-uuid': '2c879d1e-7499-4665-8880-438b30ff9d86'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.877 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:54 compute-0 NetworkManager[56360]: <info>  [1764431514.8785] manager: (tap28ff21af-c2): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/58)
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.879 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.889 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:54 compute-0 nova_compute[189485]: 2025-11-29 15:51:54.890 189489 INFO os_vif [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:82:93:16,bridge_name='br-int',has_traffic_filtering=True,id=28ff21af-c272-489e-85c2-27ab6ad320db,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ff21af-c2')
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.042 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.042 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.043 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] No VIF found with MAC fa:16:3e:82:93:16, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.043 189489 INFO nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Using config drive
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.378 189489 DEBUG nova.compute.manager [req-ab1e70b1-42c7-4fb4-a023-38855cc63f3e req-511ec7b2-35c5-451b-b934-4be4ad224e62 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received event network-changed-28ff21af-c272-489e-85c2-27ab6ad320db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.378 189489 DEBUG nova.compute.manager [req-ab1e70b1-42c7-4fb4-a023-38855cc63f3e req-511ec7b2-35c5-451b-b934-4be4ad224e62 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Refreshing instance network info cache due to event network-changed-28ff21af-c272-489e-85c2-27ab6ad320db. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.379 189489 DEBUG oslo_concurrency.lockutils [req-ab1e70b1-42c7-4fb4-a023-38855cc63f3e req-511ec7b2-35c5-451b-b934-4be4ad224e62 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.379 189489 DEBUG oslo_concurrency.lockutils [req-ab1e70b1-42c7-4fb4-a023-38855cc63f3e req-511ec7b2-35c5-451b-b934-4be4ad224e62 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.380 189489 DEBUG nova.network.neutron [req-ab1e70b1-42c7-4fb4-a023-38855cc63f3e req-511ec7b2-35c5-451b-b934-4be4ad224e62 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Refreshing network info cache for port 28ff21af-c272-489e-85c2-27ab6ad320db _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 15:51:55 compute-0 nova_compute[189485]: 2025-11-29 15:51:55.654 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.304 189489 INFO nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Creating config drive at /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk.config
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.312 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi4_56rp2 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.455 189489 DEBUG oslo_concurrency.processutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpi4_56rp2" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:51:56 compute-0 kernel: tap28ff21af-c2: entered promiscuous mode
Nov 29 15:51:56 compute-0 NetworkManager[56360]: <info>  [1764431516.5598] manager: (tap28ff21af-c2): new Tun device (/org/freedesktop/NetworkManager/Devices/59)
Nov 29 15:51:56 compute-0 ovn_controller[97827]: 2025-11-29T15:51:56Z|00124|binding|INFO|Claiming lport 28ff21af-c272-489e-85c2-27ab6ad320db for this chassis.
Nov 29 15:51:56 compute-0 ovn_controller[97827]: 2025-11-29T15:51:56Z|00125|binding|INFO|28ff21af-c272-489e-85c2-27ab6ad320db: Claiming fa:16:3e:82:93:16 10.100.3.44
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.565 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.579 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:93:16 10.100.3.44'], port_security=['fa:16:3e:82:93:16 10.100.3.44'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.44/16', 'neutron:device_id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5e134a6-ec2b-4ce9-9b80-87ce5b922531', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=517fd69e-9ef0-4dda-87e3-69c54b736518, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=28ff21af-c272-489e-85c2-27ab6ad320db) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.580 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 28ff21af-c272-489e-85c2-27ab6ad320db in datapath 7871c73c-0a09-4317-aff1-d5a297fb41ee bound to our chassis
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.582 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7871c73c-0a09-4317-aff1-d5a297fb41ee
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.595 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[9848506f-0620-49c6-b0cb-70863185b588]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.597 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap7871c73c-01 in ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.600 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap7871c73c-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.600 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[e5fb3984-f946-4305-8c52-88aa65aeb067]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.603 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[aeb5652c-fc63-4f95-b759-0440b2fac458]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 systemd-machined[155802]: New machine qemu-12-instance-0000000b.
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.627 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[4255e0b9-8a96-406c-8aa6-3046071c2411]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.629 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:56 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-0000000b.
Nov 29 15:51:56 compute-0 ovn_controller[97827]: 2025-11-29T15:51:56Z|00126|binding|INFO|Setting lport 28ff21af-c272-489e-85c2-27ab6ad320db ovn-installed in OVS
Nov 29 15:51:56 compute-0 ovn_controller[97827]: 2025-11-29T15:51:56Z|00127|binding|INFO|Setting lport 28ff21af-c272-489e-85c2-27ab6ad320db up in Southbound
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.637 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:51:56 compute-0 systemd-udevd[252721]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.663 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[3475b440-0268-4c11-9ce5-7dd50c8079a0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 NetworkManager[56360]: <info>  [1764431516.6780] device (tap28ff21af-c2): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:51:56 compute-0 NetworkManager[56360]: <info>  [1764431516.6877] device (tap28ff21af-c2): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.700 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[abcab7f7-4f1b-4f48-8af6-6a1a18ff78a2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 systemd-udevd[252724]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:51:56 compute-0 NetworkManager[56360]: <info>  [1764431516.7134] manager: (tap7871c73c-00): new Veth device (/org/freedesktop/NetworkManager/Devices/60)
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.706 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[2ada37ca-6d27-49c0-a256-3b1f39d5c058]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.743 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[f412b428-d5b1-4391-ae9b-9be035bce4fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.746 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[efebda70-f6dd-44fd-8c96-dae02d6bd425]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:51:56 compute-0 NetworkManager[56360]: <info>  [1764431516.7702] device (tap7871c73c-00): carrier: link connected
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.773 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[30f8d3fd-ee96-4fd6-a91b-6010dee79988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.789 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[cbe7e6cc-b065-42f6-aec7-aa6e3675152b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7871c73c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:cd:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527242, 'reachable_time': 16181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252751, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.805 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[61e831b5-fac5-40c8-9851-ff04e4e9a877]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee8:cd76'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527242, 'tstamp': 527242}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252752, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.821 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[999b722f-09fa-48b6-b6bf-64538ce0b81a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7871c73c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:cd:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527242, 'reachable_time': 16181, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252753, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
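The two privsep replies above are raw netlink dumps taken inside the ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee namespace: an RTM_NEWADDR for the link-local fe80:: address and an RTM_NEWLINK for the veth peer tap7871c73c-01. A minimal pyroute2 sketch that reproduces the same queries (requires root and assumes the namespace still exists):

    # Sketch: dump links and addresses inside the OVN metadata namespace,
    # mirroring the RTM_NEWLINK / RTM_NEWADDR replies logged above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee') as ns:
        for link in ns.get_links():                 # -> RTM_NEWLINK messages
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_ADDRESS'),
                  link.get_attr('IFLA_OPERSTATE'))
        for addr in ns.get_addr(index=2):           # -> RTM_NEWADDR, ifindex 2
            print(addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
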
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.846 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[c73aa546-f8df-4663-9e44-5a9c59619bb6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.892 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[5e5e0379-aa49-4635-8ac5-aa25745fc69e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.894 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7871c73c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.894 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.895 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7871c73c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.897 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:56 compute-0 NetworkManager[56360]: <info>  [1764431516.8980] manager: (tap7871c73c-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/61)
Nov 29 15:51:56 compute-0 kernel: tap7871c73c-00: entered promiscuous mode
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.900 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.901 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7871c73c-00, col_values=(('external_ids', {'iface-id': '44ccce0e-f764-41d1-8796-ff08932a6de2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
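Taken together, the three ovsdbapp transactions above (DelPortCommand, AddPortCommand, DbSetCommand) detach tap7871c73c-00 from br-ex, plug it into br-int, and stamp the Interface row with the Neutron port UUID so ovn-controller can claim the binding. A rough equivalent using ovsdbapp's Open_vSwitch schema API; the ovsdb socket path is an assumption for a stock install:

    # Sketch: replay the DelPort / AddPort / DbSet sequence with ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tap7871c73c-00', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tap7871c73c-00', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tap7871c73c-00',
            ('external_ids', {'iface-id': '44ccce0e-f764-41d1-8796-ff08932a6de2'})))
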
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.902 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:56 compute-0 ovn_controller[97827]: 2025-11-29T15:51:56Z|00128|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.921 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:56 compute-0 nova_compute[189485]: 2025-11-29 15:51:56.926 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.927 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/7871c73c-0a09-4317-aff1-d5a297fb41ee.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/7871c73c-0a09-4317-aff1-d5a297fb41ee.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.928 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[5ed8e9fc-bf21-49b0-807a-c8aa80fe4ce1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.929 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-7871c73c-0a09-4317-aff1-d5a297fb41ee
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/7871c73c-0a09-4317-aff1-d5a297fb41ee.pid.haproxy
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID 7871c73c-0a09-4317-aff1-d5a297fb41ee
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
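The rendered configuration above is written to /var/lib/neutron/ovn-metadata-proxy/7871c73c-0a09-4317-aff1-d5a297fb41ee.conf (the -f argument of the command that follows). A generated file like this can be syntax-checked without starting the proxy, using haproxy's -c mode; a sketch, assuming haproxy is reachable on the host:

    # Sketch: validate the generated metadata-proxy config and exit.
    import subprocess

    cfg = ('/var/lib/neutron/ovn-metadata-proxy/'
           '7871c73c-0a09-4317-aff1-d5a297fb41ee.conf')
    subprocess.run(['haproxy', '-c', '-f', cfg], check=True)
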
Nov 29 15:51:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:56.929 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'env', 'PROCESS_TAG=haproxy-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/7871c73c-0a09-4317-aff1-d5a297fb41ee.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Nov 29 15:51:57 compute-0 podman[252785]: 2025-11-29 15:51:57.32820507 +0000 UTC m=+0.077909487 container create 2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:51:57 compute-0 systemd[1]: Started libpod-conmon-2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7.scope.
Nov 29 15:51:57 compute-0 podman[252785]: 2025-11-29 15:51:57.297417132 +0000 UTC m=+0.047121539 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:51:57 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:51:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/93b3e9b9ca697d8b61296997940e032b66094a616806238f6283bf74cb18cde1/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:51:57 compute-0 podman[252785]: 2025-11-29 15:51:57.451982328 +0000 UTC m=+0.201686755 container init 2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:51:57 compute-0 podman[252785]: 2025-11-29 15:51:57.462156563 +0000 UTC m=+0.211860960 container start 2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:51:57 compute-0 neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee[252797]: [NOTICE]   (252807) : New worker (252809) forked
Nov 29 15:51:57 compute-0 neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee[252797]: [NOTICE]   (252807) : Loading success.
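With the worker forked and the listener bound to 169.254.169.254:80 inside the namespace, the proxy can be exercised from the host. A sketch; the request path is the standard OpenStack metadata path rather than one taken from this log, and root plus curl are assumed:

    # Sketch: hit the metadata proxy from inside the ovnmeta namespace.
    import subprocess

    ns = 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee'
    out = subprocess.run(
        ['ip', 'netns', 'exec', ns, 'curl', '-s',
         'http://169.254.169.254/openstack/latest/meta_data.json'],
        capture_output=True, text=True)
    print(out.stdout or out.stderr)
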
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.551 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431517.551461, 2c879d1e-7499-4665-8880-438b30ff9d86 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.552 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] VM Started (Lifecycle Event)#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.568 189489 DEBUG nova.compute.manager [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received event network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.569 189489 DEBUG oslo_concurrency.lockutils [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.569 189489 DEBUG oslo_concurrency.lockutils [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.569 189489 DEBUG oslo_concurrency.lockutils [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.570 189489 DEBUG nova.compute.manager [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Processing event network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.570 189489 DEBUG nova.compute.manager [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received event network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.570 189489 DEBUG oslo_concurrency.lockutils [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.570 189489 DEBUG oslo_concurrency.lockutils [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.571 189489 DEBUG oslo_concurrency.lockutils [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.571 189489 DEBUG nova.compute.manager [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] No waiting events found dispatching network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.571 189489 WARNING nova.compute.manager [req-618a7fe5-abb3-43d4-97ac-d7f8e9f31e44 req-37f103bc-0a3d-4df1-a4d2-aeca54dbce32 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received unexpected event network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db for instance with vm_state building and task_state spawning.#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.572 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
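The network-vif-plugged events popped above reach nova-compute through Nova's os-server-external-events API, which Neutron calls when the port becomes active; the duplicate delivery after the waiter has already completed is why the second copy is logged as an unexpected event rather than an error. A hedged sketch of that REST call (the token is a placeholder; endpoint, server and port UUIDs are the ones in this log):

    # Sketch: the external-event POST Neutron sends for a plugged VIF.
    import requests

    NOVA = 'https://nova-internal.openstack.svc:8774/v2.1'
    requests.post(
        NOVA + '/os-server-external-events',
        headers={'X-Auth-Token': '<keystone-token>',
                 'Content-Type': 'application/json'},
        json={'events': [{
            'name': 'network-vif-plugged',
            'server_uuid': '2c879d1e-7499-4665-8880-438b30ff9d86',
            'tag': '28ff21af-c272-489e-85c2-27ab6ad320db',  # the port UUID
            'status': 'completed',
        }]})
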
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.577 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.582 189489 INFO nova.virt.libvirt.driver [-] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Instance spawned successfully.#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.592 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.602 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.607 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.620 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.620 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.621 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.621 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.622 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.622 189489 DEBUG nova.virt.libvirt.driver [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.645 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.645 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431517.5515492, 2c879d1e-7499-4665-8880-438b30ff9d86 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.646 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.684 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.689 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431517.5764015, 2c879d1e-7499-4665-8880-438b30ff9d86 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.689 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.704 189489 INFO nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Took 8.20 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.704 189489 DEBUG nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.709 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.718 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.748 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.769 189489 INFO nova.compute.manager [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Took 8.70 seconds to build instance.#033[00m
Nov 29 15:51:57 compute-0 nova_compute[189485]: 2025-11-29 15:51:57.784 189489 DEBUG oslo_concurrency.lockutils [None req-bc23126b-adde-4ede-aee2-3bd3c7fa66a6 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.776s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:51:58 compute-0 nova_compute[189485]: 2025-11-29 15:51:58.270 189489 DEBUG nova.network.neutron [req-ab1e70b1-42c7-4fb4-a023-38855cc63f3e req-511ec7b2-35c5-451b-b934-4be4ad224e62 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updated VIF entry in instance network info cache for port 28ff21af-c272-489e-85c2-27ab6ad320db. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:51:58 compute-0 nova_compute[189485]: 2025-11-29 15:51:58.271 189489 DEBUG nova.network.neutron [req-ab1e70b1-42c7-4fb4-a023-38855cc63f3e req-511ec7b2-35c5-451b-b934-4be4ad224e62 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:51:58 compute-0 nova_compute[189485]: 2025-11-29 15:51:58.287 189489 DEBUG oslo_concurrency.lockutils [req-ab1e70b1-42c7-4fb4-a023-38855cc63f3e req-511ec7b2-35c5-451b-b934-4be4ad224e62 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:51:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:59.210 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:51:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:59.211 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:51:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:51:59.213 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
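This _check_child_processes pass is neutron's ProcessMonitor confirming the freshly spawned haproxy is still alive; conceptually it reads the pidfile named in the configuration above and probes the PID. A minimal liveness sketch along those lines (the helper logic is illustrative, not neutron's code):

    # Sketch: pidfile-based liveness probe for the metadata proxy.
    import os

    pidfile = ('/var/lib/neutron/external/pids/'
               '7871c73c-0a09-4317-aff1-d5a297fb41ee.pid.haproxy')
    try:
        with open(pidfile) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)   # signal 0: existence check, sends nothing
        print('metadata proxy alive, pid', pid)
    except (FileNotFoundError, ValueError, ProcessLookupError):
        print('metadata proxy not running')
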
Nov 29 15:51:59 compute-0 podman[252819]: 2025-11-29 15:51:59.691135926 +0000 UTC m=+0.124118908 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Nov 29 15:51:59 compute-0 podman[203677]: time="2025-11-29T15:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:51:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:51:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5255 "" "Go-http-client/1.1"
Nov 29 15:51:59 compute-0 nova_compute[189485]: 2025-11-29 15:51:59.878 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:00 compute-0 nova_compute[189485]: 2025-11-29 15:52:00.656 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.060 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.061 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.062 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
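The burst of "Registering pollster" lines above is the polling manager fanning every pollster from the [pollsters] source onto one shared ThreadPoolExecutor; with only [1] worker thread it had already warned that the cycle may run long. The pattern, reduced to a self-contained sketch (names are illustrative, not ceilometer's):

    # Sketch: register-then-execute polling on a shared single-thread executor.
    from concurrent.futures import ThreadPoolExecutor

    def make_pollster(name):
        def poll(cache):
            cache[name] = cache.get(name, 0) + 1   # stand-in for sampling
            return name, cache[name]
        return poll

    executor = ThreadPoolExecutor(max_workers=1)   # matches "[1] threads"
    cache = {}                                     # shared pollster cache
    futures = [executor.submit(make_pollster(n), cache)
               for n in ('network.outgoing.bytes', 'network.incoming.bytes')]
    for fut in futures:
        print(fut.result())
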
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.069 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance ea685573-5d12-4d41-8c8d-1d73dc63399d from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 29 15:52:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:01.070 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/ea685573-5d12-4d41-8c8d-1d73dc63399d -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}21f1b25129fd7f828fba82e66d37137d0fe6cb4aa99b37755c299ad1aab8f053" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 29 15:52:01 compute-0 openstack_network_exporter[205841]: ERROR   15:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:52:01 compute-0 openstack_network_exporter[205841]: ERROR   15:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:52:01 compute-0 openstack_network_exporter[205841]: ERROR   15:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:52:01 compute-0 openstack_network_exporter[205841]: ERROR   15:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:52:01 compute-0 openstack_network_exporter[205841]: ERROR   15:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:52:01 compute-0 podman[252841]: 2025-11-29 15:52:01.676861959 +0000 UTC m=+0.111876819 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:52:01 compute-0 podman[252840]: 2025-11-29 15:52:01.698369888 +0000 UTC m=+0.124052378 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, name=ubi9, io.buildah.version=1.29.0, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc.)
Nov 29 15:52:01 compute-0 podman[252842]: 2025-11-29 15:52:01.700501205 +0000 UTC m=+0.113576056 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2)
Nov 29 15:52:01 compute-0 podman[252859]: 2025-11-29 15:52:01.732932767 +0000 UTC m=+0.134004775 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=edpm, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7)
Nov 29 15:52:01 compute-0 podman[252848]: 2025-11-29 15:52:01.73302014 +0000 UTC m=+0.144757845 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
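The three health_status=healthy events above are podman's periodic healthcheck runs: each container's config_data carries a 'healthcheck' entry whose 'test' command podman executes inside the container, and health_failing_streak counts consecutive failures. A minimal sketch of reading that recorded state back out from Python (the container name ovn_controller comes from the log; the exact "Health"/"Healthcheck" key layout in the podman inspect JSON is an assumption about the podman release on this host):

    import json
    import subprocess

    def container_health(name: str) -> dict:
        # `podman inspect` emits a JSON array; .State holds the recorded
        # healthcheck status and failing streak for the named container.
        out = subprocess.run(
            ["podman", "inspect", name],
            capture_output=True, check=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        # newer podman uses "Health"; some older releases used "Healthcheck"
        return state.get("Health") or state.get("Healthcheck", {})

    if __name__ == "__main__":
        print(container_health("ovn_controller"))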
Nov 29 15:52:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:02.408 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1979 Content-Type: application/json Date: Sat, 29 Nov 2025 15:52:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-bd4a9a31-433f-447d-93b8-15971826aee3 x-openstack-request-id: req-bd4a9a31-433f-447d-93b8-15971826aee3 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 29 15:52:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:02.408 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "ea685573-5d12-4d41-8c8d-1d73dc63399d", "name": "tempest-ServerActionsTestJSON-server-153023418", "status": "ACTIVE", "tenant_id": "79e3732a895b43ce86538671ea9e7670", "user_id": "b595faab5dfa4b4e9aff6a34b1473172", "metadata": {}, "hostId": "438890a87809354fd4b3dfbb91a0bc5e0bb25964d9f205a2f2644992", "image": {"id": "6a931c3a-089f-4276-ac71-a0da3ffce7c7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/6a931c3a-089f-4276-ac71-a0da3ffce7c7"}]}, "flavor": {"id": "cde1daa0-956a-446c-a1eb-2046e0cd1fa7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cde1daa0-956a-446c-a1eb-2046e0cd1fa7"}]}, "created": "2025-11-29T15:50:11Z", "updated": "2025-11-29T15:51:47Z", "addresses": {"tempest-ServerActionsTestJSON-1500630099-network": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b8:50:d3"}, {"version": 4, "addr": "192.168.122.245", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:b8:50:d3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/ea685573-5d12-4d41-8c8d-1d73dc63399d"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/ea685573-5d12-4d41-8c8d-1d73dc63399d"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-106632266", "OS-SRV-USG:launched_at": "2025-11-29T15:50:26.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--275343292"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 29 15:52:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:02.408 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/ea685573-5d12-4d41-8c8d-1d73dc63399d used request id req-bd4a9a31-433f-447d-93b8-15971826aee3 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 29 15:52:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:02.411 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'ea685573-5d12-4d41-8c8d-1d73dc63399d', 'name': 'tempest-ServerActionsTestJSON-server-153023418', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '79e3732a895b43ce86538671ea9e7670', 'user_id': 'b595faab5dfa4b4e9aff6a34b1473172', 'hostId': '438890a87809354fd4b3dfbb91a0bc5e0bb25964d9f205a2f2644992', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
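The RESP/RESP BODY pair above is keystoneauth1's HTTP debug logging around a novaclient servers.get() call made during instance discovery. A minimal sketch of issuing the same GET (the auth URL, credentials, and domain names are placeholders, not values from this deployment; endpoint_type='internal' mirrors the OS_ENDPOINT_TYPE setting in the container config):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder credentials -- substitute values for your cloud.
    auth = v3.Password(
        auth_url="https://keystone-internal.openstack.svc:5000/v3",
        username="ceilometer", password="secret", project_name="service",
        user_domain_name="Default", project_domain_name="Default",
    )
    sess = session.Session(auth=auth)

    # API microversion 2.1 and the internal endpoint, as in the logged call
    nova = nova_client.Client("2.1", session=sess, endpoint_type="internal")
    server = nova.servers.get("ea685573-5d12-4d41-8c8d-1d73dc63399d")
    print(server.name, server.status, server.flavor["id"])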
Nov 29 15:52:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:02.414 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 2c879d1e-7499-4665-8880-438b30ff9d86 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 29 15:52:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:02.415 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/2c879d1e-7499-4665-8880-438b30ff9d86 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}21f1b25129fd7f828fba82e66d37137d0fe6cb4aa99b37755c299ad1aab8f053" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.836 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1831 Content-Type: application/json Date: Sat, 29 Nov 2025 15:52:02 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-284e716e-036f-43ed-8ad7-bf0b39dd509c x-openstack-request-id: req-284e716e-036f-43ed-8ad7-bf0b39dd509c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.837 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "2c879d1e-7499-4665-8880-438b30ff9d86", "name": "te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl", "status": "ACTIVE", "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "user_id": "997fde32c4f7472e87493536b60e7b64", "metadata": {"metering.server_group": "4838e190-17b5-46fc-b5c5-64e289c1eccb"}, "hostId": "ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a", "image": {"id": "276c0a04-08bd-40bb-ad7b-a0be69fa4466", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/276c0a04-08bd-40bb-ad7b-a0be69fa4466"}]}, "flavor": {"id": "cde1daa0-956a-446c-a1eb-2046e0cd1fa7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cde1daa0-956a-446c-a1eb-2046e0cd1fa7"}]}, "created": "2025-11-29T15:51:47Z", "updated": "2025-11-29T15:51:57Z", "addresses": {"": [{"version": 4, "addr": "10.100.3.44", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:82:93:16"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/2c879d1e-7499-4665-8880-438b30ff9d86"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/2c879d1e-7499-4665-8880-438b30ff9d86"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-29T15:51:57.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.837 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/2c879d1e-7499-4665-8880-438b30ff9d86 used request id req-284e716e-036f-43ed-8ad7-bf0b39dd509c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.840 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'name': 'te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.840 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.842 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.842 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:52:03.842479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.848 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for ea685573-5d12-4d41-8c8d-1d73dc63399d / tap471b576d-ab inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.849 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.854 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 2c879d1e-7499-4665-8880-438b30ff9d86 / tap28ff21af-c2 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.855 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
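"No delta meter predecessor" above means this is the first time the agent has seen that instance/tap pair, so there is no cached previous counter to subtract and the sample volume is reported as 0. An illustrative sketch of that cache-and-subtract pattern (names here are hypothetical, not ceilometer's internals):

    # hypothetical names; only illustrates the predecessor cache
    _previous: dict[tuple[str, str], int] = {}

    def delta_sample(instance_id: str, device: str, counter: int) -> int:
        """Return the counter delta since the last poll, 0 on first sight."""
        key = (instance_id, device)
        prev = _previous.get(key)
        _previous[key] = counter
        if prev is None:
            return 0  # no delta meter predecessor
        return max(counter - prev, 0)  # clamp: counters reset on reboot

    assert delta_sample("ea685573", "tap471b576d-ab", 90) == 0
    assert delta_sample("ea685573", "tap471b576d-ab", 150) == 60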
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.857 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.858 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.858 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.859 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:52:03.859364) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.860 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.861 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
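The repeated "Checking if we need coordination" lines reflect ceilometer's workload partitioning: when several agents poll the same source, a hash ring decides which agent owns each resource; here no coordination group is configured, so the hashrings are [None] and this agent polls every instance itself. A toy version of the ownership test using plain consistent hashing (hashlib only; not ceilometer's actual tooz-backed implementation):

    import hashlib

    def owns_resource(resource_id: str, members: list[str], me: str) -> bool:
        """Toy partitioning: hash each resource onto exactly one member."""
        digest = hashlib.md5(resource_id.encode()).hexdigest()
        return members[int(digest, 16) % len(members)] == me

    members = ["agent-0", "agent-1"]
    print(owns_resource("ea685573-5d12-4d41-8c8d-1d73dc63399d",
                        members, "agent-0"))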
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.863 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:52:03.865326) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.865 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.904 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.904 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance ea685573-5d12-4d41-8c8d-1d73dc63399d: ceilometer.compute.pollsters.NoVolumeException
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.946 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.946 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 2c879d1e-7499-4665-8880-438b30ff9d86: ceilometer.compute.pollsters.NoVolumeException
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
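memory.usage is derived from the balloon driver's guest memory statistics; when the guest does not report them (no stats collection period configured on the memballoon device), the inspector has no volume and the pollster emits the NoVolumeException warnings above. A minimal sketch of the underlying query, assuming the libvirt-python bindings, a local qemu:///system socket, and that usage can be derived as available minus unused when both counters are reported:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000009")

    stats = dom.memoryStats()  # balloon-driver counters, in KiB
    if "available" in stats and "unused" in stats:
        print("memory.usage MB:",
              (stats["available"] - stats["unused"]) // 1024)
    else:
        # balloon stats not enabled for the guest -> 'Unavailable' above
        print("memory.usage unavailable for this domain")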
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.948 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.949 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.949 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.950 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:52:03.949747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.951 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.952 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.952 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.953 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.954 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.954 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-29T15:52:03.954423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.955 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.955 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-153023418>, <NovaLikeServer: te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-153023418>, <NovaLikeServer: te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl>]
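The ERROR above is deliberate blacklisting rather than a crash: the libvirt inspector exposes no precomputed *.rate data, so the pollster raises PollsterPermanentError and the manager stops offering those resources to it instead of failing on every cycle. An illustrative sketch of that pattern (class and attribute names here are hypothetical, not the ceilometer code):

    # hypothetical names; illustrates the blacklist-on-permanent-error pattern
    class PermanentError(Exception):
        def __init__(self, resources):
            super().__init__(resources)
            self.resources = resources

    class RatePollster:
        def get_samples(self, resources):
            # the inspector has no rate data: permanent, not transient
            raise PermanentError(resources)

    blacklist = []
    try:
        RatePollster().get_samples(["server-a", "server-b"])
    except PermanentError as exc:
        blacklist.extend(exc.resources)  # manager skips these next cycle
    print(blacklist)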
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.957 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.958 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.958 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.959 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.958 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:52:03.958440) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.959 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.961 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.962 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.962 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.963 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.963 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:52:03.963362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.964 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.964 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.965 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.966 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.967 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:03 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:03.967 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:52:03.967286) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.023 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.024 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.084 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.085 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.086 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
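disk.device.read.bytes is sampled per block device, which is why each instance produces two samples above (the 1 GB root disk plus the small config drive). A minimal sketch of reading the per-device counters, assuming the libvirt-python bindings and a local qemu:///system socket:

    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000009")

    # enumerate the domain's disk targets from its XML description
    for target in ElementTree.fromstring(dom.XMLDesc()).findall(
            "./devices/disk/target"):
        dev = target.get("dev")  # e.g. 'vda' (root) or the config drive
        rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats(dev)
        print(f"{dev}: disk.device.read.bytes={rd_bytes} "
              f"disk.device.read.requests={rd_req}")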
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.087 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.087 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.087 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.087 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.088 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.088 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:52:04.087906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.088 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.089 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.090 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.091 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:52:04.091392) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.110 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.111 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.124 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.125 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
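disk.device.capacity and disk.device.usage map onto libvirt's per-device block info: the virtual size versus the bytes actually allocated (1073741824 above is the 1 GB m1.nano root disk; 509952 is the config drive). A minimal sketch under the same libvirt assumptions as before ('vda' as the device name is an assumption):

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000009")

    capacity, allocation, physical = dom.blockInfo("vda")
    print("disk.device.capacity:", capacity)   # virtual size, bytes
    print("disk.device.usage:", allocation)    # bytes actually allocated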
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.127 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.128 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.128 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/cpu volume: 15630000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.128 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:52:04.128130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.129 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/cpu volume: 6090000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
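The cpu meter is cumulative guest CPU time in nanoseconds (15630000000 ns is about 15.6 s for the first instance). libvirt reports it in the domain info tuple; a minimal sketch under the same libvirt assumptions:

    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByName("instance-00000009")

    state, max_mem, mem, vcpus, cpu_time_ns = dom.info()
    print(f"cpu volume: {cpu_time_ns} ns "
          f"(~{cpu_time_ns / 1e9:.1f} s across {vcpus} vCPU)")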
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.130 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.131 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.132 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.132 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.read.latency volume: 373287757 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.132 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:52:04.131969) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.132 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.read.latency volume: 179386534 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.133 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 399852999 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.133 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 1313765 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.134 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.135 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.135 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.135 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.136 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.136 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.136 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-29T15:52:04.136446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.136 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.137 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-153023418>, <NovaLikeServer: te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-153023418>, <NovaLikeServer: te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl>]
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.137 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.138 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.138 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.139 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.139 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.139 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.139 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:52:04.139288) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.140 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.140 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.141 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.142 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.143 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.144 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.144 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:52:04.144369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.145 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.145 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.146 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.148 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.149 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.149 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:52:04.149347) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.150 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.150 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.151 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.152 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.154 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.154 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:52:04.154034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.155 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.155 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.156 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.157 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.158 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.158 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:52:04.158437) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.159 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
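The power.state samples report nova's power-state enum rather than a counter; the volume of 1 for both instances corresponds to RUNNING. The mapping, reproduced as a reference sketch (values as in nova.compute.power_state; treat as an illustration, not an import):

    # nova power-state codes; both instances above report volume: 1 (RUNNING).
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    assert POWER_STATES[1] == "RUNNING"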
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.160 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:52:04.162008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.162 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.163 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.163 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.164 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.165 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.165 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.166 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.166 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.166 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:52:04.166554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.167 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.167 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.169 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:52:04.169160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.169 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.170 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.171 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.171 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.171 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:52:04.170963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.171 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.171 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.172 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
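The disk.device.allocation samples above track the disk.device.usage samples from the same devices, which is the shape of libvirt's per-device block info for what appear to be file-backed disks under /var/lib/nova/instances: virDomainGetBlockInfo returns capacity, allocation, and physical bytes per device, and ceilometer's libvirt inspector typically derives these meters from that call. A sketch of pulling the same numbers with libvirt-python, assuming a local qemu:///system connection; an illustration, not ceilometer's exact inspector code:

    import libvirt
    from xml.etree import ElementTree

    conn = libvirt.openReadOnly("qemu:///system")
    for dom_id in conn.listDomainsID():
        dom = conn.lookupByID(dom_id)
        xml = ElementTree.fromstring(dom.XMLDesc(0))
        for target in xml.findall("./devices/disk/target"):
            dev = target.get("dev")  # e.g. "vda"
            capacity, allocation, physical = dom.blockInfo(dev)
            print(dom.UUIDString(), dev, capacity, allocation, physical)
    conn.close()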
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.173 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.174 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.174 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.174 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:52:04.174071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.174 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.175 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.175 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.176 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:52:04.176536) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.177 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.177 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.177 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.178 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.178 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.178 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:52:04.178530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.179 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.179 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.180 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.181 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:52:04.181035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.181 14 DEBUG ceilometer.compute.pollsters [-] ea685573-5d12-4d41-8c8d-1d73dc63399d/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.181 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.182 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.183 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:52:04.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:52:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:04.613 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:52:04 compute-0 nova_compute[189485]: 2025-11-29 15:52:04.613 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:04.615 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 15:52:04 compute-0 ovn_controller[97827]: 2025-11-29T15:52:04Z|00129|binding|INFO|Releasing lport 0c9e125e-3b1f-4aef-b336-cdad32359771 from this chassis (sb_readonly=0)
Nov 29 15:52:04 compute-0 ovn_controller[97827]: 2025-11-29T15:52:04Z|00130|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:52:04 compute-0 nova_compute[189485]: 2025-11-29 15:52:04.881 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:04 compute-0 nova_compute[189485]: 2025-11-29 15:52:04.891 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:05 compute-0 nova_compute[189485]: 2025-11-29 15:52:05.659 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:06 compute-0 nova_compute[189485]: 2025-11-29 15:52:06.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:52:06 compute-0 nova_compute[189485]: 2025-11-29 15:52:06.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 15:52:06 compute-0 nova_compute[189485]: 2025-11-29 15:52:06.508 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 15:52:06 compute-0 podman[252934]: 2025-11-29 15:52:06.676868146 +0000 UTC m=+0.120080261 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 15:52:09 compute-0 podman[252951]: 2025-11-29 15:52:09.611074375 +0000 UTC m=+0.064533336 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
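Both podman events above are periodic executions of the healthcheck configured under 'healthcheck' in config_data (test '/openstack/healthcheck'), with health_failing_streak counting consecutive failures. The same check can be triggered on demand; a sketch via subprocess, assuming podman is on PATH and using the container names from the log:

    import subprocess

    # Exit status 0 marks the container healthy; non-zero increments the
    # failing streak reported as health_failing_streak in the events above.
    subprocess.run(["podman", "healthcheck", "run", "multipathd"], check=True)
    subprocess.run(["podman", "healthcheck", "run", "node_exporter"], check=True)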
Nov 29 15:52:09 compute-0 nova_compute[189485]: 2025-11-29 15:52:09.883 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:10 compute-0 nova_compute[189485]: 2025-11-29 15:52:10.662 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:11.617 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
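This transaction closes the loop opened at 15:52:04: SB_Global.nb_cfg moved from 12 to 13, the metadata agent deliberately waited 7 seconds (a staggered delay, apparently to keep every chassis from writing back at once), and it now acknowledges the new nb_cfg by writing it into its Chassis_Private external_ids. The same write expressed as an ovsdbapp call; sb_idl stands in for the agent's already-connected southbound-DB API object, and the helper's exact signature may vary by ovsdbapp version:

    # Equivalent of the DbSetCommand logged above (a sketch, not agent code).
    sb_idl.db_set(
        'Chassis_Private', '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),
        if_exists=True,
    ).execute(check_error=True)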
Nov 29 15:52:14 compute-0 nova_compute[189485]: 2025-11-29 15:52:14.886 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:15 compute-0 nova_compute[189485]: 2025-11-29 15:52:15.666 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:17 compute-0 nova_compute[189485]: 2025-11-29 15:52:17.508 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:52:19 compute-0 nova_compute[189485]: 2025-11-29 15:52:19.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:52:19 compute-0 nova_compute[189485]: 2025-11-29 15:52:19.487 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:52:19 compute-0 nova_compute[189485]: 2025-11-29 15:52:19.891 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:20 compute-0 ovn_controller[97827]: 2025-11-29T15:52:20Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:b8:50:d3 10.100.0.11
Nov 29 15:52:20 compute-0 nova_compute[189485]: 2025-11-29 15:52:20.599 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:52:20 compute-0 nova_compute[189485]: 2025-11-29 15:52:20.600 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:52:20 compute-0 nova_compute[189485]: 2025-11-29 15:52:20.601 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:52:20 compute-0 nova_compute[189485]: 2025-11-29 15:52:20.669 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:21 compute-0 podman[252984]: 2025-11-29 15:52:21.663993758 +0000 UTC m=+0.113465982 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:52:23 compute-0 nova_compute[189485]: 2025-11-29 15:52:23.076 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:52:23 compute-0 nova_compute[189485]: 2025-11-29 15:52:23.655 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updating instance_info_cache with network_info: [{"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
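The network_info blob above is a list of VIFs, each nesting subnets under network and floating IPs under each fixed IP. A sketch of walking that structure, assuming network_info holds the parsed JSON list:

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fixed = ip["address"]  # 10.100.0.11 above
                floating = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], fixed, floating)  # -> ['192.168.122.245']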
Nov 29 15:52:23 compute-0 nova_compute[189485]: 2025-11-29 15:52:23.670 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:52:23 compute-0 nova_compute[189485]: 2025-11-29 15:52:23.670 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
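The Acquiring/Acquired/Releasing lines framing the cache refresh are oslo.concurrency's lock helper guarding a per-instance critical section. The same pattern as a sketch; the refresh call is a hypothetical stand-in for the neutron round-trip nova performs here:

    from oslo_concurrency import lockutils

    # Serializes cache refreshes per instance, as in the log lines above.
    with lockutils.lock('refresh_cache-ea685573-5d12-4d41-8c8d-1d73dc63399d'):
        refresh_network_info_cache()  # hypothetical: re-query neutron, save cache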
Nov 29 15:52:23 compute-0 nova_compute[189485]: 2025-11-29 15:52:23.671 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:23 compute-0 nova_compute[189485]: 2025-11-29 15:52:23.672 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.509 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.510 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.511 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.511 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.623 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.706 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.707 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.775 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.781 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.866 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.868 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.895 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:24 compute-0 nova_compute[189485]: 2025-11-29 15:52:24.935 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
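
The qemu-img probes above are nova's periodic disk audit: each run is wrapped in oslo.concurrency's prlimit helper so a hung or hostile image cannot exhaust the host. A minimal sketch of the same call, assuming only that oslo.concurrency is installed; the limits and disk path are taken from the log lines above.

    # Sketch: the resource-limited "qemu-img info" probe seen above.
    # --as=1073741824 caps the address space at 1 GiB and --cpu=30 caps
    # CPU time at 30 s; processutils re-execs the command under
    # "python -m oslo_concurrency.prlimit", exactly as logged.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(
        address_space=1024 * 1024 * 1024,  # bytes, matches --as=1073741824
        cpu_time=30,                       # seconds, matches --cpu=30
    )
    out, err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d/disk',
        '--force-share', '--output=json',
        prlimit=limits,
    )
    print(out)
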
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.326 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.327 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5025MB free_disk=72.27691268920898GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.329 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.330 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.542 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance ea685573-5d12-4d41-8c8d-1d73dc63399d actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.547 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.547 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.547 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.608 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.672 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.840 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.842 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.861 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.891 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Nov 29 15:52:25 compute-0 nova_compute[189485]: 2025-11-29 15:52:25.958 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:52:26 compute-0 nova_compute[189485]: 2025-11-29 15:52:26.044 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
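
Placement applies the standard capacity rule to each inventory record above: effective capacity = (total - reserved) * allocation_ratio. A quick worked check against the logged values:

    # Worked check of the schedulable capacity implied by the inventory
    # logged above: capacity = int((total - reserved) * allocation_ratio).
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
        print(rc, cap)
    # VCPU 32, MEMORY_MB 7167, DISK_GB 70
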
Nov 29 15:52:26 compute-0 nova_compute[189485]: 2025-11-29 15:52:26.069 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:52:26 compute-0 nova_compute[189485]: 2025-11-29 15:52:26.070 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.740s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
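
The acquire/release pairs around "compute_resources" (waited 0.001s, held 0.740s) come from oslo.concurrency's in-process lock decorator, which logs wait and hold times itself at DEBUG. A minimal sketch of the pattern, assuming a recent oslo.concurrency; the function name is hypothetical and fair=True is an assumption, only the lock name comes from the log.

    # Sketch: the lock discipline behind the "compute_resources"
    # acquire/release DEBUG lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources', fair=True)  # fair=True assumed
    def _update_available_resource_sketch():
        # Runs with the in-process "compute_resources" lock held;
        # lockutils emits the acquired/released lines seen in the journal.
        pass

    _update_available_resource_sketch()
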
Nov 29 15:52:27 compute-0 nova_compute[189485]: 2025-11-29 15:52:27.069 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:27 compute-0 nova_compute[189485]: 2025-11-29 15:52:27.071 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:29 compute-0 nova_compute[189485]: 2025-11-29 15:52:29.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:29 compute-0 nova_compute[189485]: 2025-11-29 15:52:29.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 15:52:29 compute-0 podman[203677]: time="2025-11-29T15:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:52:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:52:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5248 "" "Go-http-client/1.1"
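
The two GETs above are hits on podman's libpod REST API over its unix socket (the socket path unix:///run/podman/podman.sock appears in the podman_exporter config further down). A stdlib-only sketch of the same container listing; the socket path is taken from that config and the API version from the request line above.

    # Sketch: query podman's libpod API over the unix socket using only
    # the standard library. Field names follow the libpod list-containers
    # JSON ("Names", "State").
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix domain socket."""
        def __init__(self, sock_path):
            super().__init__('localhost')
            self._sock_path = sock_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._sock_path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    for c in json.loads(conn.getresponse().read()):
        print(c['Names'], c['State'])
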
Nov 29 15:52:29 compute-0 nova_compute[189485]: 2025-11-29 15:52:29.900 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:30 compute-0 nova_compute[189485]: 2025-11-29 15:52:30.674 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:30 compute-0 podman[253029]: 2025-11-29 15:52:30.698837704 +0000 UTC m=+0.142797012 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=edpm)
Nov 29 15:52:31 compute-0 openstack_network_exporter[205841]: ERROR   15:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:52:31 compute-0 openstack_network_exporter[205841]: ERROR   15:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:52:31 compute-0 openstack_network_exporter[205841]: ERROR   15:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:52:31 compute-0 openstack_network_exporter[205841]: ERROR   15:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:52:31 compute-0 openstack_network_exporter[205841]: ERROR   15:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:52:31 compute-0 nova_compute[189485]: 2025-11-29 15:52:31.555 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:32 compute-0 ovn_controller[97827]: 2025-11-29T15:52:32Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:82:93:16 10.100.3.44
Nov 29 15:52:32 compute-0 ovn_controller[97827]: 2025-11-29T15:52:32Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:82:93:16 10.100.3.44
Nov 29 15:52:32 compute-0 podman[253047]: 2025-11-29 15:52:32.637914071 +0000 UTC m=+0.078708268 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Nov 29 15:52:32 compute-0 podman[253056]: 2025-11-29 15:52:32.649791991 +0000 UTC m=+0.078831421 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, vcs-type=git, maintainer=Red Hat, Inc., name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, config_id=edpm, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 29 15:52:32 compute-0 podman[253048]: 2025-11-29 15:52:32.649870933 +0000 UTC m=+0.087534766 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm)
Nov 29 15:52:32 compute-0 podman[253046]: 2025-11-29 15:52:32.663679754 +0000 UTC m=+0.110298237 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc.)
Nov 29 15:52:32 compute-0 podman[253049]: 2025-11-29 15:52:32.752047841 +0000 UTC m=+0.180246019 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 15:52:34 compute-0 nova_compute[189485]: 2025-11-29 15:52:34.904 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:35 compute-0 nova_compute[189485]: 2025-11-29 15:52:35.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:35 compute-0 nova_compute[189485]: 2025-11-29 15:52:35.678 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:37 compute-0 podman[253142]: 2025-11-29 15:52:37.712400171 +0000 UTC m=+0.155315978 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 15:52:39 compute-0 nova_compute[189485]: 2025-11-29 15:52:39.717 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:39 compute-0 podman[253161]: 2025-11-29 15:52:39.852555286 +0000 UTC m=+0.101199432 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 15:52:39 compute-0 nova_compute[189485]: 2025-11-29 15:52:39.907 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:40 compute-0 nova_compute[189485]: 2025-11-29 15:52:40.681 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:42 compute-0 nova_compute[189485]: 2025-11-29 15:52:42.505 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:52:42 compute-0 nova_compute[189485]: 2025-11-29 15:52:42.507 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Nov 29 15:52:44 compute-0 nova_compute[189485]: 2025-11-29 15:52:44.909 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:45 compute-0 nova_compute[189485]: 2025-11-29 15:52:45.683 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:45 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:45.849 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:52:45 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:45.850 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 15:52:45 compute-0 nova_compute[189485]: 2025-11-29 15:52:45.855 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:48.852 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
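
The DbSetCommand above is ovsdbapp's generic db_set: the metadata agent acknowledges nb_cfg 14 by writing it into its Chassis_Private external_ids. A sketch of the same transaction; the southbound connection string is an assumption about the local deployment, while the table, record UUID, and external_ids entry come from the log.

    # Sketch: replay the Chassis_Private update logged above via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/ovn/ovnsb_db.sock',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
    sb.db_set('Chassis_Private', '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),
              if_exists=True).execute(check_error=True)
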
Nov 29 15:52:48 compute-0 nova_compute[189485]: 2025-11-29 15:52:48.890 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:49 compute-0 nova_compute[189485]: 2025-11-29 15:52:49.913 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:50 compute-0 nova_compute[189485]: 2025-11-29 15:52:50.687 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.261 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.262 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.263 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.264 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.264 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.267 189489 INFO nova.compute.manager [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Terminating instance#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.269 189489 DEBUG nova.compute.manager [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:52:52 compute-0 kernel: tap471b576d-ab (unregistering): left promiscuous mode
Nov 29 15:52:52 compute-0 NetworkManager[56360]: <info>  [1764431572.3096] device (tap471b576d-ab): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:52:52 compute-0 ovn_controller[97827]: 2025-11-29T15:52:52Z|00131|binding|INFO|Releasing lport 471b576d-abd9-4813-915c-33fdffb4ae94 from this chassis (sb_readonly=0)
Nov 29 15:52:52 compute-0 ovn_controller[97827]: 2025-11-29T15:52:52Z|00132|binding|INFO|Setting lport 471b576d-abd9-4813-915c-33fdffb4ae94 down in Southbound
Nov 29 15:52:52 compute-0 ovn_controller[97827]: 2025-11-29T15:52:52Z|00133|binding|INFO|Removing iface tap471b576d-ab ovn-installed in OVS
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.350 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.352 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:b8:50:d3 10.100.0.11'], port_security=['fa:16:3e:b8:50:d3 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': 'ea685573-5d12-4d41-8c8d-1d73dc63399d', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '79e3732a895b43ce86538671ea9e7670', 'neutron:revision_number': '6', 'neutron:security_group_ids': 'd8e2a464-eef4-4c41-a809-d94caef28d98', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.245'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=02d3693f-5198-43ab-859b-ff500142407c, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=471b576d-abd9-4813-915c-33fdffb4ae94) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.355 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 471b576d-abd9-4813-915c-33fdffb4ae94 in datapath 29b0dade-4512-451e-9fdc-1b8d13fd5972 unbound from our chassis#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.360 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 29b0dade-4512-451e-9fdc-1b8d13fd5972, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.362 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[3b6939f7-cc7d-4040-a6ca-310047909c4f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.364 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 namespace which is not needed anymore#033[00m
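
With the last VIF gone from datapath 29b0dade-4512-451e-9fdc-1b8d13fd5972, the agent deletes the per-network ovnmeta- namespace. A sketch of the underlying removal using pyroute2 (which neutron drives through oslo.privsep); illustrative only, not neutron's exact call path.

    # Sketch: remove the ovnmeta- namespace named in the log with pyroute2.
    from pyroute2 import netns

    ns = 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972'
    if ns in netns.listnetns():
        netns.remove(ns)
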
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.375 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000009.scope: Deactivated successfully.
Nov 29 15:52:52 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d00000009.scope: Consumed 43.139s CPU time.
Nov 29 15:52:52 compute-0 systemd-machined[155802]: Machine qemu-11-instance-00000009 terminated.
Nov 29 15:52:52 compute-0 podman[253188]: 2025-11-29 15:52:52.437466756 +0000 UTC m=+0.107859322 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.546 189489 INFO nova.virt.libvirt.driver [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Instance destroyed successfully.#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.549 189489 DEBUG nova.objects.instance [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lazy-loading 'resources' on Instance uuid ea685573-5d12-4d41-8c8d-1d73dc63399d obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:52:52 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[252631]: [NOTICE]   (252635) : haproxy version is 2.8.14-c23fe91
Nov 29 15:52:52 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[252631]: [NOTICE]   (252635) : path to executable is /usr/sbin/haproxy
Nov 29 15:52:52 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[252631]: [WARNING]  (252635) : Exiting Master process...
Nov 29 15:52:52 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[252631]: [WARNING]  (252635) : Exiting Master process...
Nov 29 15:52:52 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[252631]: [ALERT]    (252635) : Current worker (252637) exited with code 143 (Terminated)
Nov 29 15:52:52 compute-0 neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972[252631]: [WARNING]  (252635) : All workers exited. Exiting... (0)
Nov 29 15:52:52 compute-0 systemd[1]: libpod-87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097.scope: Deactivated successfully.
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.592 189489 DEBUG nova.compute.manager [req-33c76402-be6d-4112-8564-6f5e824c65b6 req-f5768c77-1960-4940-b429-71b614ea8f90 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-unplugged-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.593 189489 DEBUG oslo_concurrency.lockutils [req-33c76402-be6d-4112-8564-6f5e824c65b6 req-f5768c77-1960-4940-b429-71b614ea8f90 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.593 189489 DEBUG oslo_concurrency.lockutils [req-33c76402-be6d-4112-8564-6f5e824c65b6 req-f5768c77-1960-4940-b429-71b614ea8f90 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:52:52 compute-0 podman[253233]: 2025-11-29 15:52:52.594369535 +0000 UTC m=+0.077890306 container died 87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.595 189489 DEBUG oslo_concurrency.lockutils [req-33c76402-be6d-4112-8564-6f5e824c65b6 req-f5768c77-1960-4940-b429-71b614ea8f90 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.596 189489 DEBUG nova.compute.manager [req-33c76402-be6d-4112-8564-6f5e824c65b6 req-f5768c77-1960-4940-b429-71b614ea8f90 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] No waiting events found dispatching network-vif-unplugged-471b576d-abd9-4813-915c-33fdffb4ae94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.596 189489 DEBUG nova.compute.manager [req-33c76402-be6d-4112-8564-6f5e824c65b6 req-f5768c77-1960-4940-b429-71b614ea8f90 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-unplugged-471b576d-abd9-4813-915c-33fdffb4ae94 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.600 189489 DEBUG nova.virt.libvirt.vif [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:50:11Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-153023418',display_name='tempest-ServerActionsTestJSON-server-153023418',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-153023418',id=9,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHe84/Vw1/UE6MjH9hSoZ8S+lF+m9Cdu9Av7vTw88OmQpmBt5taKTJ/r+cWSkzwOPRZEvDuFb+SsqaHgLTHP3NrHdnllgdosFCEIeqEnWDvyEA3QKG1liQQzPUp2/9l1bw==',key_name='tempest-keypair-106632266',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:50:26Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='79e3732a895b43ce86538671ea9e7670',ramdisk_id='',reservation_id='r-7ix6aam2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-1517137287',owner_user_name='tempest-ServerActionsTestJSON-1517137287-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:51:47Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='b595faab5dfa4b4e9aff6a34b1473172',uuid=ea685573-5d12-4d41-8c8d-1d73dc63399d,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.601 189489 DEBUG nova.network.os_vif_util [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converting VIF {"id": "471b576d-abd9-4813-915c-33fdffb4ae94", "address": "fa:16:3e:b8:50:d3", "network": {"id": "29b0dade-4512-451e-9fdc-1b8d13fd5972", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-1500630099-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.245", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "79e3732a895b43ce86538671ea9e7670", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap471b576d-ab", "ovs_interfaceid": "471b576d-abd9-4813-915c-33fdffb4ae94", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.602 189489 DEBUG nova.network.os_vif_util [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.603 189489 DEBUG os_vif [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.606 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.607 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap471b576d-ab, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
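
The DelPortCommand above is ovsdbapp's IDL-backed transaction API at work. A minimal standalone sketch of the same deletion, assuming a local ovsdb-server socket at unix:/run/openvswitch/db.sock (deployments may use a different endpoint):

    import ovs.db.idl
    from ovsdbapp.backend.ovs_idl import connection, idlutils
    from ovsdbapp.schema.open_vswitch import impl_idl

    endpoint = 'unix:/run/openvswitch/db.sock'  # assumed socket path
    helper = idlutils.get_schema_helper(endpoint, 'Open_vSwitch')
    helper.register_all()
    api = impl_idl.OvsdbIdl(
        connection.Connection(ovs.db.idl.Idl(endpoint, helper), timeout=10))

    # Same operation as the logged txn: drop the tap port from br-int,
    # tolerating the port already being gone (if_exists=True).
    api.del_port('tap471b576d-ab', bridge='br-int', if_exists=True).execute(
        check_error=True)
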
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.609 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.613 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.616 189489 INFO os_vif [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:b8:50:d3,bridge_name='br-int',has_traffic_filtering=True,id=471b576d-abd9-4813-915c-33fdffb4ae94,network=Network(29b0dade-4512-451e-9fdc-1b8d13fd5972),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap471b576d-ab')#033[00m
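
Behind the convert/unplug lines above is the os-vif library's public surface: nova builds a VIFOpenVSwitch object and hands it to os_vif.unplug(), which dispatches to the 'ovs' plugin. A reduced sketch, using the field values shown in the object repr above (a sketch of the API shape, not nova's exact code path):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the ovs/linux_bridge/... plugins

    inst = instance_info.InstanceInfo(
        uuid='ea685573-5d12-4d41-8c8d-1d73dc63399d',
        name='tempest-ServerActionsTestJSON-server-153023418')
    ovs_vif = vif.VIFOpenVSwitch(
        id='471b576d-abd9-4813-915c-33fdffb4ae94',
        address='fa:16:3e:b8:50:d3',
        vif_name='tap471b576d-ab',
        bridge_name='br-int',
        plugin='ovs',
        network=network.Network(id='29b0dade-4512-451e-9fdc-1b8d13fd5972'))

    # Emits the DelPortCommand transaction seen earlier in the log.
    os_vif.unplug(ovs_vif, inst)
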
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.617 189489 INFO nova.virt.libvirt.driver [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Deleting instance files /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d_del#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.622 189489 INFO nova.virt.libvirt.driver [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Deletion of /var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d_del complete#033[00m
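
The _del suffix in the two lines above reflects nova's rename-first teardown: the instance directory is renamed before removal, so a crash mid-delete can never leave a half-deleted tree under the live path. A minimal sketch of the pattern (not nova's exact implementation):

    import os
    import shutil

    def delete_instance_files(instance_path):
        # Rename first: an interrupted delete leaves '<uuid>_del', which
        # is unambiguously garbage, rather than a corrupt live directory.
        target = instance_path + '_del'
        os.rename(instance_path, target)
        shutil.rmtree(target, ignore_errors=True)

    delete_instance_files(
        '/var/lib/nova/instances/ea685573-5d12-4d41-8c8d-1d73dc63399d')
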
Nov 29 15:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097-userdata-shm.mount: Deactivated successfully.
Nov 29 15:52:52 compute-0 systemd[1]: var-lib-containers-storage-overlay-25901e556368186a7ec910056ed497531bda0e2d0a7263f8af7701fe8ba9a24b-merged.mount: Deactivated successfully.
Nov 29 15:52:52 compute-0 podman[253233]: 2025-11-29 15:52:52.642989282 +0000 UTC m=+0.126510053 container cleanup 87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:52:52 compute-0 systemd[1]: libpod-conmon-87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097.scope: Deactivated successfully.
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.692 189489 INFO nova.compute.manager [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.693 189489 DEBUG oslo.service.loopingcall [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.693 189489 DEBUG nova.compute.manager [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.693 189489 DEBUG nova.network.neutron [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:52:52 compute-0 podman[253278]: 2025-11-29 15:52:52.727846655 +0000 UTC m=+0.057886618 container remove 87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true)
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.736 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[52cb9fb5-225c-4a60-a8b4-b1e396c1dfae]: (4, ('Sat Nov 29 03:52:52 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 (87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097)\n87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097\nSat Nov 29 03:52:52 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 (87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097)\n87987525706e3a5cc5e01618ac7f1968cde4e5ca2c04b337b9994537f4c73097\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.739 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[11208abf-3a44-479c-8065-529c2ba6a568]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.740 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap29b0dade-40, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.742 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 kernel: tap29b0dade-40: left promiscuous mode
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.764 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 nova_compute[189485]: 2025-11-29 15:52:52.767 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.768 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8a3abf2b-41a3-4ffb-9905-80649f92d5c8]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.792 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[2ddcafa5-42b9-4c60-8a72-2918fb6ba318]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.793 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8cf42b2e-b6f7-4124-b82b-dbfb595ad0e1]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.810 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[55b375a6-d765-483a-a313-d3b5389b64be]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 526293, 'reachable_time': 19746, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 
'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253293, 'error': None, 'target': 'ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.813 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 15:52:52 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:52.813 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[d2b3f32d-950b-4006-9699-4a95e0b3de5a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:52:52 compute-0 systemd[1]: run-netns-ovnmeta\x2d29b0dade\x2d4512\x2d451e\x2d9fdc\x2d1b8d13fd5972.mount: Deactivated successfully.
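
The remove_netns call and the privsep reply tuples above come from neutron's privilege separation: unprivileged agent code invokes a decorated entrypoint, the privsep daemon executes it as root, and the serialized reply is what oslo.privsep logs. A sketch of the pattern, assuming oslo.privsep and pyroute2 (the context here is hypothetical; neutron ships its own under neutron/privileged/):

    from oslo_privsep import capabilities, priv_context
    from pyroute2 import netns

    default = priv_context.PrivContext(
        __name__, cfg_section='privsep',
        pypath=__name__ + '.default',
        capabilities=[capabilities.CAP_SYS_ADMIN,
                      capabilities.CAP_NET_ADMIN])

    @default.entrypoint
    def remove_netns(name):
        # Runs inside the privsep daemon; the caller only sees the
        # reply tuples logged by oslo.privsep.daemon above.
        netns.remove(name)

    remove_netns('ovnmeta-29b0dade-4512-451e-9fdc-1b8d13fd5972')
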
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.710 189489 DEBUG nova.network.neutron [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.735 189489 INFO nova.compute.manager [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Took 2.04 seconds to deallocate network for instance.#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.763 189489 DEBUG nova.compute.manager [req-d6820a09-61ad-48e2-8983-75e864b12d39 req-ba47e7f6-f518-4dc5-857f-6af235f74006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.763 189489 DEBUG oslo_concurrency.lockutils [req-d6820a09-61ad-48e2-8983-75e864b12d39 req-ba47e7f6-f518-4dc5-857f-6af235f74006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.764 189489 DEBUG oslo_concurrency.lockutils [req-d6820a09-61ad-48e2-8983-75e864b12d39 req-ba47e7f6-f518-4dc5-857f-6af235f74006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.765 189489 DEBUG oslo_concurrency.lockutils [req-d6820a09-61ad-48e2-8983-75e864b12d39 req-ba47e7f6-f518-4dc5-857f-6af235f74006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
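
The Acquiring/acquired/released triplets that recur throughout this log are oslo.concurrency's lockutils instrumentation. Equivalent usage, with the lock names taken from the lines above:

    from oslo_concurrency import lockutils

    # Context-manager form, as used for the per-instance event lock:
    with lockutils.lock('ea685573-5d12-4d41-8c8d-1d73dc63399d-events'):
        pass  # pop the pending instance event under the lock

    # Decorator form, as used by the resource tracker:
    @lockutils.synchronized('compute_resources')
    def update_usage():
        pass  # critical section over the host's resource accounting
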
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.765 189489 DEBUG nova.compute.manager [req-d6820a09-61ad-48e2-8983-75e864b12d39 req-ba47e7f6-f518-4dc5-857f-6af235f74006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] No waiting events found dispatching network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.765 189489 WARNING nova.compute.manager [req-d6820a09-61ad-48e2-8983-75e864b12d39 req-ba47e7f6-f518-4dc5-857f-6af235f74006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received unexpected event network-vif-plugged-471b576d-abd9-4813-915c-33fdffb4ae94 for instance with vm_state active and task_state deleting.#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.780 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.781 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.895 189489 DEBUG nova.compute.provider_tree [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.920 189489 DEBUG nova.scheduler.client.report [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
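
The inventory payload above fixes the host's schedulable capacity: placement treats each resource class as (total - reserved) * allocation_ratio. Worked through with these numbers (a sketch of the formula, not placement's code):

    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, capacity)
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
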
Nov 29 15:52:54 compute-0 nova_compute[189485]: 2025-11-29 15:52:54.996 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.215s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:52:55 compute-0 nova_compute[189485]: 2025-11-29 15:52:55.024 189489 INFO nova.scheduler.client.report [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Deleted allocations for instance ea685573-5d12-4d41-8c8d-1d73dc63399d#033[00m
Nov 29 15:52:55 compute-0 nova_compute[189485]: 2025-11-29 15:52:55.103 189489 DEBUG oslo_concurrency.lockutils [None req-15e5a9e6-ab45-488d-8a19-c8c3d9f604d1 b595faab5dfa4b4e9aff6a34b1473172 79e3732a895b43ce86538671ea9e7670 - - default default] Lock "ea685573-5d12-4d41-8c8d-1d73dc63399d" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.841s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:52:55 compute-0 nova_compute[189485]: 2025-11-29 15:52:55.690 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:57 compute-0 nova_compute[189485]: 2025-11-29 15:52:57.013 189489 DEBUG nova.compute.manager [req-6c7b3840-e66e-4eba-83a7-ac96018bde0b req-15684e8e-42d4-40dc-bdac-c2710cfffbe5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Received event network-vif-deleted-471b576d-abd9-4813-915c-33fdffb4ae94 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:52:57 compute-0 nova_compute[189485]: 2025-11-29 15:52:57.612 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:52:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:59.211 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:52:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:59.212 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:52:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:52:59.213 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:52:59 compute-0 podman[203677]: time="2025-11-29T15:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:52:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:52:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4792 "" "Go-http-client/1.1"
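
The two GETs above are the libpod REST API being polled over podman's unix socket (a health/stats collection loop). The same query can be reproduced with only the standard library; the socket path below is an assumption, since the log does not show it:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client routed over an AF_UNIX socket."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')  # assumed path
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')
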
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.597 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.598 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.612 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.677 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.678 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.690 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.692 189489 INFO nova.compute.claims [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.697 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.820 189489 DEBUG nova.compute.provider_tree [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.835 189489 DEBUG nova.scheduler.client.report [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.864 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.865 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.911 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.912 189489 DEBUG nova.network.neutron [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
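
allocate_for_instance() is nova asking neutron to create and bind a port on the instance's behalf. Stripped of nova's bookkeeping, the core request is roughly this openstacksdk call (the network UUID and credential source are placeholders; the real network comes from the boot request):

    import openstack

    conn = openstack.connect(cloud='envvars')  # assumed credential source
    port = conn.network.create_port(
        network_id='<tenant-network-uuid>',
        device_id='609941f8-b5e1-4f1f-9c99-5e4bc5f5b232',
        device_owner='compute:nova')
    print(port.id, port.mac_address)
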
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.935 189489 INFO nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:53:00 compute-0 nova_compute[189485]: 2025-11-29 15:53:00.955 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.055 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.058 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.059 189489 INFO nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Creating image(s)#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.060 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "/var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.061 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "/var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.062 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "/var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.090 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.116 189489 DEBUG nova.policy [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '6ffdcfadc95949538d09357b0b49d925', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'adde993c93894d9681ea78f0147c8a52', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
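
The failed policy check above is expected for a non-admin tenant: it only gates attaching to external networks, and the boot proceeds on a tenant network. A sketch of the same decision with oslo.policy, assuming the usual is_admin:True default for this rule:

    from oslo_config import cfg
    from oslo_policy import policy

    enforcer = policy.Enforcer(cfg.CONF)
    enforcer.register_default(policy.RuleDefault(
        'network:attach_external_network', 'is_admin:True'))  # assumed default

    creds = {'roles': ['reader', 'member'], 'is_admin': False}
    print(enforcer.enforce('network:attach_external_network', {}, creds))
    # -> False, matching the DEBUG line above
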
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.181 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.182 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.182 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.196 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.269 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.271 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.319 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.320 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.320 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.389 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
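
The qemu-img sequence above (info on the cached base, create of the overlay, info again) is nova's qcow2 copy-on-write image backend: the instance disk is a thin overlay whose backing file is the shared raw base image, sized to the flavor's 1 GiB root disk. The same sequence reduced to the standard library, with flags and paths copied from the log lines:

    import json
    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            'c7e712fd6afdf0909a364074b7f15b004ad35ab1')
    overlay = ('/var/lib/nova/instances/'
               '609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk')

    # Probe the shared base image without taking a write lock on it.
    info = json.loads(subprocess.check_output(
        ['qemu-img', 'info', base, '--force-share', '--output=json']))

    # Thin overlay: the instance disk stores only blocks that diverge
    # from the raw base image.
    subprocess.check_call(
        ['qemu-img', 'create', '-f', 'qcow2',
         '-o', f'backing_file={base},backing_fmt=raw',
         overlay, str(1024 ** 3)])
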
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.390 189489 DEBUG nova.virt.disk.api [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Checking if we can resize image /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.390 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:53:01 compute-0 openstack_network_exporter[205841]: ERROR   15:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:53:01 compute-0 openstack_network_exporter[205841]: ERROR   15:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:53:01 compute-0 openstack_network_exporter[205841]: ERROR   15:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:53:01 compute-0 openstack_network_exporter[205841]: ERROR   15:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:53:01 compute-0 openstack_network_exporter[205841]: ERROR   15:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.455 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.456 189489 DEBUG nova.virt.disk.api [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Cannot resize image /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
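
"Cannot resize image ... to a smaller size" is the grow-only guard in nova.virt.disk.api: the overlay was created at exactly the flavor's 1 GiB, so there is nothing to grow. The check amounts to (a sketch, not nova's exact code):

    def can_resize_image(virtual_size, requested_size):
        # qcow2 can be grown safely; shrinking would truncate guest data,
        # so resize only when the target is strictly larger.
        return requested_size > virtual_size

    print(can_resize_image(1073741824, 1 * 1024 ** 3))  # -> False, as logged
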
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.456 189489 DEBUG nova.objects.instance [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lazy-loading 'migration_context' on Instance uuid 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.480 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.481 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Ensure instance console log exists: /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.481 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.482 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:53:01 compute-0 nova_compute[189485]: 2025-11-29 15:53:01.482 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:53:01 compute-0 podman[253309]: 2025-11-29 15:53:01.660767329 +0000 UTC m=+0.111314275 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.license=GPLv2)
Nov 29 15:53:02 compute-0 nova_compute[189485]: 2025-11-29 15:53:02.346 189489 DEBUG nova.network.neutron [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Successfully created port: fe0e2687-2636-4247-a729-26a0e3c624a0 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 15:53:02 compute-0 nova_compute[189485]: 2025-11-29 15:53:02.615 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:02 compute-0 ovn_controller[97827]: 2025-11-29T15:53:02Z|00134|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:53:02 compute-0 nova_compute[189485]: 2025-11-29 15:53:02.643 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:02 compute-0 ovn_controller[97827]: 2025-11-29T15:53:02Z|00135|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:53:02 compute-0 nova_compute[189485]: 2025-11-29 15:53:02.909 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:03 compute-0 nova_compute[189485]: 2025-11-29 15:53:03.585 189489 DEBUG nova.network.neutron [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Successfully updated port: fe0e2687-2636-4247-a729-26a0e3c624a0 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:53:03 compute-0 nova_compute[189485]: 2025-11-29 15:53:03.606 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:53:03 compute-0 nova_compute[189485]: 2025-11-29 15:53:03.606 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquired lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:53:03 compute-0 nova_compute[189485]: 2025-11-29 15:53:03.606 189489 DEBUG nova.network.neutron [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:53:03 compute-0 podman[253334]: 2025-11-29 15:53:03.649876013 +0000 UTC m=+0.085970734 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, io.openshift.expose-services=, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Nov 29 15:53:03 compute-0 podman[253332]: 2025-11-29 15:53:03.680687451 +0000 UTC m=+0.112767044 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 15:53:03 compute-0 podman[253331]: 2025-11-29 15:53:03.694925354 +0000 UTC m=+0.135425443 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 15:53:03 compute-0 nova_compute[189485]: 2025-11-29 15:53:03.702 189489 DEBUG nova.compute.manager [req-d5f6e21e-1d11-45d2-8961-c5b56c2c5c34 req-924064d1-3a9c-41b7-b2f3-9feff1ae7a0c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received event network-changed-fe0e2687-2636-4247-a729-26a0e3c624a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:53:03 compute-0 nova_compute[189485]: 2025-11-29 15:53:03.702 189489 DEBUG nova.compute.manager [req-d5f6e21e-1d11-45d2-8961-c5b56c2c5c34 req-924064d1-3a9c-41b7-b2f3-9feff1ae7a0c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Refreshing instance network info cache due to event network-changed-fe0e2687-2636-4247-a729-26a0e3c624a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:53:03 compute-0 nova_compute[189485]: 2025-11-29 15:53:03.703 189489 DEBUG oslo_concurrency.lockutils [req-d5f6e21e-1d11-45d2-8961-c5b56c2c5c34 req-924064d1-3a9c-41b7-b2f3-9feff1ae7a0c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:53:03 compute-0 podman[253333]: 2025-11-29 15:53:03.708218341 +0000 UTC m=+0.139751990 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
Nov 29 15:53:03 compute-0 podman[253330]: 2025-11-29 15:53:03.714758247 +0000 UTC m=+0.153918450 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., version=9.4, release-0.7.12=, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, name=ubi9, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_id=edpm)
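
The health_status=healthy events above come from podman container healthchecks declared in each container's config_data ('healthcheck': {'test': ..., 'mount': ...}). A minimal sketch of a manually wired equivalent, assuming the standard podman CLI; the image, name, and test command are taken from the ceilometer_agent_ipmi event, while the 30s interval is an assumption not present in the log:

    import subprocess

    # Hypothetical manual equivalent of the healthcheck in the config_data above;
    # --health-cmd and --health-interval are standard `podman run` options.
    subprocess.run([
        "podman", "run", "-d", "--name", "ceilometer_agent_ipmi",
        "--health-cmd", "/openstack/healthcheck ipmi",
        "--health-interval", "30s",  # assumed interval, not from the log
        "quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified",
    ], check=True)
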
Nov 29 15:53:03 compute-0 nova_compute[189485]: 2025-11-29 15:53:03.763 189489 DEBUG nova.network.neutron [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.816 189489 DEBUG nova.network.neutron [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Updating instance_info_cache with network_info: [{"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
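
The network_info payload in the cache update above is plain JSON, so it can be inspected offline when debugging. A minimal stdlib sketch, using a trimmed stand-in for the full payload (only fields that appear in the log):

    import json

    # Trimmed stand-in for the network_info logged above.
    network_info = json.loads('''[{"id": "fe0e2687-2636-4247-a729-26a0e3c624a0",
      "address": "fa:16:3e:09:15:fd",
      "network": {"subnets": [{"cidr": "10.100.0.0/28",
        "ips": [{"address": "10.100.0.11", "type": "fixed"}]}]}}]''')

    vif = network_info[0]
    print(vif["id"], vif["address"],
          vif["network"]["subnets"][0]["ips"][0]["address"])
    # -> fe0e2687-2636-4247-a729-26a0e3c624a0 fa:16:3e:09:15:fd 10.100.0.11
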
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.843 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Releasing lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.843 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Instance network_info: |[{"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.844 189489 DEBUG oslo_concurrency.lockutils [req-d5f6e21e-1d11-45d2-8961-c5b56c2c5c34 req-924064d1-3a9c-41b7-b2f3-9feff1ae7a0c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.844 189489 DEBUG nova.network.neutron [req-d5f6e21e-1d11-45d2-8961-c5b56c2c5c34 req-924064d1-3a9c-41b7-b2f3-9feff1ae7a0c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Refreshing network info cache for port fe0e2687-2636-4247-a729-26a0e3c624a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.850 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Start _get_guest_xml network_info=[{"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.864 189489 WARNING nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.876 189489 DEBUG nova.virt.libvirt.host [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.877 189489 DEBUG nova.virt.libvirt.host [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.893 189489 DEBUG nova.virt.libvirt.host [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.894 189489 DEBUG nova.virt.libvirt.host [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.895 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.895 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.896 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.897 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.898 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.899 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.899 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.900 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.900 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.901 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.902 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.902 189489 DEBUG nova.virt.hardware [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
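
The 1:1:1 result in the topology search above follows from simple factorization: combinations of (sockets, cores, threads) whose product equals the vCPU count, within the logged limits. A hypothetical re-implementation sketch of that enumeration (not nova's actual code; the 65536 defaults mirror the limits logged above):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate factorizations sockets*cores*threads == vcpus
        # within the given limits.
        return [
            (s, c, t)
            for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                                   range(1, min(vcpus, max_cores) + 1),
                                   range(1, min(vcpus, max_threads) + 1))
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # [(1, 1, 1)] -- "Got 1 possible topologies"
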
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.909 189489 DEBUG nova.virt.libvirt.vif [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1957561350',display_name='tempest-TestServerBasicOps-server-1957561350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1957561350',id=12,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1NUKtQLvUF7OdZp6tiYeKRLfsz+Nt9cU1aO0s91dgvdY4nJNMpSyly2TSvKLRn2+lzCNhuwawR/Kk2cuf6Rew+DV9gI/MN3TDcu77Sx36rOqqRNPSFHa+wNuYLRoFk0Q==',key_name='tempest-TestServerBasicOps-399626093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='adde993c93894d9681ea78f0147c8a52',ramdisk_id='',reservation_id='r-s22176y8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2084881187',owner_user_name='tempest-TestServerBasicOps-2084881187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:53:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ffdcfadc95949538d09357b0b49d925',uuid=609941f8-b5e1-4f1f-9c99-5e4bc5f5b232,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.909 189489 DEBUG nova.network.os_vif_util [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Converting VIF {"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.911 189489 DEBUG nova.network.os_vif_util [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:15:fd,bridge_name='br-int',has_traffic_filtering=True,id=fe0e2687-2636-4247-a729-26a0e3c624a0,network=Network(539b3be1-041f-4cb0-bb96-caaac62c4d34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe0e2687-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.912 189489 DEBUG nova.objects.instance [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lazy-loading 'pci_devices' on Instance uuid 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.944 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <uuid>609941f8-b5e1-4f1f-9c99-5e4bc5f5b232</uuid>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <name>instance-0000000c</name>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <nova:name>tempest-TestServerBasicOps-server-1957561350</nova:name>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:53:04</nova:creationTime>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:53:04 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:53:04 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:53:04 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:53:04 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:53:04 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:53:04 compute-0 nova_compute[189485]:        <nova:user uuid="6ffdcfadc95949538d09357b0b49d925">tempest-TestServerBasicOps-2084881187-project-member</nova:user>
Nov 29 15:53:04 compute-0 nova_compute[189485]:        <nova:project uuid="adde993c93894d9681ea78f0147c8a52">tempest-TestServerBasicOps-2084881187</nova:project>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:53:04 compute-0 nova_compute[189485]:        <nova:port uuid="fe0e2687-2636-4247-a729-26a0e3c624a0">
Nov 29 15:53:04 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.11" ipVersion="4"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <system>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <entry name="serial">609941f8-b5e1-4f1f-9c99-5e4bc5f5b232</entry>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <entry name="uuid">609941f8-b5e1-4f1f-9c99-5e4bc5f5b232</entry>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </system>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <os>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  </os>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <features>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  </features>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.config"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:09:15:fd"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <target dev="tapfe0e2687-26"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/console.log" append="off"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <video>
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </video>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:53:04 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:53:04 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:53:04 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:53:04 compute-0 nova_compute[189485]: </domain>
Nov 29 15:53:04 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
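
The domain XML dumped above can be sanity-checked offline with the Python stdlib. A minimal sketch using a trimmed stand-in for the full dump (values copied from the log; the 131072 KiB of <memory> is the m1.nano flavor's 128 MiB):

    import xml.etree.ElementTree as ET

    # Trimmed stand-in for the <domain> dump logged above.
    DOMAIN_XML = """<domain type="kvm">
      <uuid>609941f8-b5e1-4f1f-9c99-5e4bc5f5b232</uuid>
      <name>instance-0000000c</name>
      <memory>131072</memory>
      <vcpu>1</vcpu>
    </domain>"""

    root = ET.fromstring(DOMAIN_XML)
    print(root.findtext("name"))                 # instance-0000000c
    print(int(root.findtext("memory")) // 1024)  # 128 (MiB, matches m1.nano)
    print(root.findtext("vcpu"))                 # 1
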
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.944 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Preparing to wait for external event network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.944 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.945 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.945 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.946 189489 DEBUG nova.virt.libvirt.vif [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1957561350',display_name='tempest-TestServerBasicOps-server-1957561350',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1957561350',id=12,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1NUKtQLvUF7OdZp6tiYeKRLfsz+Nt9cU1aO0s91dgvdY4nJNMpSyly2TSvKLRn2+lzCNhuwawR/Kk2cuf6Rew+DV9gI/MN3TDcu77Sx36rOqqRNPSFHa+wNuYLRoFk0Q==',key_name='tempest-TestServerBasicOps-399626093',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='adde993c93894d9681ea78f0147c8a52',ramdisk_id='',reservation_id='r-s22176y8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-2084881187',owner_user_name='tempest-TestServerBasicOps-2084881187-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:53:00Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ffdcfadc95949538d09357b0b49d925',uuid=609941f8-b5e1-4f1f-9c99-5e4bc5f5b232,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.946 189489 DEBUG nova.network.os_vif_util [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Converting VIF {"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.947 189489 DEBUG nova.network.os_vif_util [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:09:15:fd,bridge_name='br-int',has_traffic_filtering=True,id=fe0e2687-2636-4247-a729-26a0e3c624a0,network=Network(539b3be1-041f-4cb0-bb96-caaac62c4d34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe0e2687-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.948 189489 DEBUG os_vif [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:15:fd,bridge_name='br-int',has_traffic_filtering=True,id=fe0e2687-2636-4247-a729-26a0e3c624a0,network=Network(539b3be1-041f-4cb0-bb96-caaac62c4d34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe0e2687-26') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.948 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.949 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.949 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.953 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.954 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapfe0e2687-26, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.955 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapfe0e2687-26, col_values=(('external_ids', {'iface-id': 'fe0e2687-2636-4247-a729-26a0e3c624a0', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:09:15:fd', 'vm-uuid': '609941f8-b5e1-4f1f-9c99-5e4bc5f5b232'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.957 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:04 compute-0 NetworkManager[56360]: <info>  [1764431584.9587] manager: (tapfe0e2687-26): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/62)
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.960 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.969 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:04 compute-0 nova_compute[189485]: 2025-11-29 15:53:04.969 189489 INFO os_vif [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:09:15:fd,bridge_name='br-int',has_traffic_filtering=True,id=fe0e2687-2636-4247-a729-26a0e3c624a0,network=Network(539b3be1-041f-4cb0-bb96-caaac62c4d34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe0e2687-26')#033[00m
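
The AddPortCommand/DbSetCommand transaction that os_vif just committed maps onto a single ovs-vsctl invocation. A hedged sketch of the manual equivalent (ovs-vsctl with --may-exist, add-port, and set is standard Open vSwitch tooling; all values are taken from the log):

    import subprocess

    # Manual equivalent of the ovsdbapp transaction logged above.
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", "br-int", "tapfe0e2687-26",
        "--", "set", "Interface", "tapfe0e2687-26",
        "external_ids:iface-id=fe0e2687-2636-4247-a729-26a0e3c624a0",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:09:15:fd",
        "external_ids:vm-uuid=609941f8-b5e1-4f1f-9c99-5e4bc5f5b232",
    ], check=True)
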
Nov 29 15:53:05 compute-0 nova_compute[189485]: 2025-11-29 15:53:05.047 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:53:05 compute-0 nova_compute[189485]: 2025-11-29 15:53:05.047 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:53:05 compute-0 nova_compute[189485]: 2025-11-29 15:53:05.047 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] No VIF found with MAC fa:16:3e:09:15:fd, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:53:05 compute-0 nova_compute[189485]: 2025-11-29 15:53:05.048 189489 INFO nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Using config drive#033[00m
Nov 29 15:53:05 compute-0 nova_compute[189485]: 2025-11-29 15:53:05.697 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:05 compute-0 nova_compute[189485]: 2025-11-29 15:53:05.855 189489 INFO nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Creating config drive at /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.config#033[00m
Nov 29 15:53:05 compute-0 nova_compute[189485]: 2025-11-29 15:53:05.862 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplxgb02xd execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.014 189489 DEBUG oslo_concurrency.processutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmplxgb02xd" returned: 0 in 0.152s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
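
The config-drive build above is an ordinary mkisofs run and can be reproduced verbatim. A minimal sketch wrapping the exact flags from the log line (the output path and source directory here are illustrative placeholders):

    import subprocess

    def build_config_drive(output_iso, source_dir, publisher):
        # Same flags as the mkisofs command logged above; produces an ISO9660
        # image labelled config-2, which the guest mounts as its config drive.
        subprocess.run([
            "/usr/bin/mkisofs", "-o", output_iso,
            "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
            "-publisher", publisher, "-quiet", "-J", "-r",
            "-V", "config-2", source_dir,
        ], check=True)

    # build_config_drive("/tmp/disk.config", "/tmp/cfgdrive-src",
    #                    "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9")
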
Nov 29 15:53:06 compute-0 kernel: tapfe0e2687-26: entered promiscuous mode
Nov 29 15:53:06 compute-0 ovn_controller[97827]: 2025-11-29T15:53:06Z|00136|binding|INFO|Claiming lport fe0e2687-2636-4247-a729-26a0e3c624a0 for this chassis.
Nov 29 15:53:06 compute-0 ovn_controller[97827]: 2025-11-29T15:53:06Z|00137|binding|INFO|fe0e2687-2636-4247-a729-26a0e3c624a0: Claiming fa:16:3e:09:15:fd 10.100.0.11
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.133 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:06 compute-0 NetworkManager[56360]: <info>  [1764431586.1383] manager: (tapfe0e2687-26): new Tun device (/org/freedesktop/NetworkManager/Devices/63)
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.163 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:15:fd 10.100.0.11'], port_security=['fa:16:3e:09:15:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '609941f8-b5e1-4f1f-9c99-5e4bc5f5b232', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-539b3be1-041f-4cb0-bb96-caaac62c4d34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'adde993c93894d9681ea78f0147c8a52', 'neutron:revision_number': '2', 'neutron:security_group_ids': '042dc84a-c12e-4a97-8a9b-39e0fd8bf0c1 78c56b68-6630-4687-9463-d645eaec30be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba151cd-a8f1-4763-b893-b48bfff2831b, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=fe0e2687-2636-4247-a729-26a0e3c624a0) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.166 106713 INFO neutron.agent.ovn.metadata.agent [-] Port fe0e2687-2636-4247-a729-26a0e3c624a0 in datapath 539b3be1-041f-4cb0-bb96-caaac62c4d34 bound to our chassis#033[00m
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.169 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 539b3be1-041f-4cb0-bb96-caaac62c4d34#033[00m
Nov 29 15:53:06 compute-0 systemd-udevd[253446]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.191 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8cc0b6a0-ad3d-45cb-a17e-546dbf78c106]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.196 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap539b3be1-01 in ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
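
The veth-in-namespace provisioning the metadata agent logs here (performed via pyroute2 behind privsep) corresponds to a few iproute2 operations. An illustrative sketch with plain ip commands, reusing the namespace and interface names from the log:

    import subprocess

    ns = "ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34"
    # Create the namespace and a veth pair, then move one end inside,
    # mirroring the provision_datapath step logged above.
    subprocess.run(["ip", "netns", "add", ns], check=True)
    subprocess.run(["ip", "link", "add", "tap539b3be1-00",
                    "type", "veth", "peer", "name", "tap539b3be1-01"],
                   check=True)
    subprocess.run(["ip", "link", "set", "tap539b3be1-01", "netns", ns],
                   check=True)
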
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.198 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap539b3be1-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.198 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[36faaa78-f4ae-4b90-b349-0cd0ecb50535]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.200 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[e458b30b-37aa-4ec0-b2d1-ad25c9e4d727]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.215 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[cf6424dd-9294-4a2e-8c73-34cb6a6cdc44]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:53:06 compute-0 systemd-machined[155802]: New machine qemu-13-instance-0000000c.
Nov 29 15:53:06 compute-0 NetworkManager[56360]: <info>  [1764431586.2263] device (tapfe0e2687-26): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:53:06 compute-0 NetworkManager[56360]: <info>  [1764431586.2277] device (tapfe0e2687-26): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.234 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:06 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Nov 29 15:53:06 compute-0 ovn_controller[97827]: 2025-11-29T15:53:06Z|00138|binding|INFO|Setting lport fe0e2687-2636-4247-a729-26a0e3c624a0 ovn-installed in OVS
Nov 29 15:53:06 compute-0 ovn_controller[97827]: 2025-11-29T15:53:06Z|00139|binding|INFO|Setting lport fe0e2687-2636-4247-a729-26a0e3c624a0 up in Southbound
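At this point ovn-controller has wired the VIF and flipped the Port_Binding to up in the OVN southbound database, which is what later produces the network-vif-plugged event nova waits on. The binding can be spot-checked from the chassis with ovn-sbctl; a sketch, using the logical port UUID from the lines above:

    import subprocess

    LPORT = 'fe0e2687-2636-4247-a729-26a0e3c624a0'   # from the log above
    out = subprocess.run(
        ['ovn-sbctl', '--bare', '--columns=chassis,up',
         'find', 'Port_Binding', 'logical_port=%s' % LPORT],
        capture_output=True, text=True, check=True)
    print(out.stdout)    # chassis UUID, then 'true' once the port is up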
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.238 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.256 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6c93f9ea-9cc3-4a14-a8e1-2ec53cb50cc0]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.305 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[eee6f04a-ca7c-4421-a75f-c52bdfcd1cfe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 systemd-udevd[253450]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:53:06 compute-0 NetworkManager[56360]: <info>  [1764431586.3187] manager: (tap539b3be1-00): new Veth device (/org/freedesktop/NetworkManager/Devices/64)
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.317 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[105369b8-d372-4b62-8ae7-68b242bdc6e8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.359 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[0e1f9f48-f8f5-4a9e-a1b0-d6158e1d92ff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.363 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[61af55c7-38f0-4dfc-b5bd-1c890a78ead7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 NetworkManager[56360]: <info>  [1764431586.3925] device (tap539b3be1-00): carrier: link connected
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.403 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[30c3a60a-ccc1-4dcd-9eee-b64580ef558e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.427 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[214faaf9-d241-4a82-9645-4cbea2f55f64]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap539b3be1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:1b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534204, 'reachable_time': 20059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253478, 'error': None, 'target': 'ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
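The large privsep reply above is a pyroute2-style RTM_NEWLINK dump for tap539b3be1-01 inside the ovnmeta namespace: operstate UP, carrier 1, MAC fa:16:3e:c6:1b:6a, MTU 1500. Netlink attributes arrive as [name, value] pairs; a small helper in the style of pyroute2's get_attr pulls a named attribute out of a dict shaped like the logged message (illustrative, pure Python):

    def get_attr(msg, name):
        """Return the first attribute called `name` from a netlink message dict."""
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return None

    # With `msg` being the first element of the reply list above:
    #   get_attr(msg, 'IFLA_IFNAME')   -> 'tap539b3be1-01'
    #   get_attr(msg, 'IFLA_ADDRESS')  -> 'fa:16:3e:c6:1b:6a'
    #   get_attr(msg, 'IFLA_MTU')      -> 1500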
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.451 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a73abf33-929d-4267-8c2d-cc7f4dab347d]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fec6:1b6a'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 534204, 'tstamp': 534204}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253479, 'error': None, 'target': 'ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.485 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[384fe8e6-44e3-49fe-bc5a-96ed0a42ebe7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap539b3be1-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:c6:1b:6a'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 41], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534204, 'reachable_time': 20059, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253480, 'error': None, 'target': 'ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.521 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8253a30c-bd46-4988-b2ae-881c9af08109]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.598 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[dfd7d397-9982-4424-9e00-70bf8b8cf9c1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.600 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap539b3be1-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.601 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.602 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap539b3be1-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.606 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:06 compute-0 kernel: tap539b3be1-00: entered promiscuous mode
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.614 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:06 compute-0 NetworkManager[56360]: <info>  [1764431586.6157] manager: (tap539b3be1-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/65)
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.628 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap539b3be1-00, col_values=(('external_ids', {'iface-id': 'cd8c0b10-6735-42d8-afeb-94453cdb2468'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
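The three ovsdbapp transactions above relocate the namespace veth: drop tap539b3be1-00 from br-ex if it is there (a no-op here), add it to br-int, then set external_ids:iface-id so ovn-controller claims it for the metadata logical port. A sketch of the same sequence with ovsdbapp; the socket path and timeout are assumptions, and the agent itself holds a long-lived IDL connection rather than building one per call:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')   # assumed socket path
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    PORT = 'tap539b3be1-00'
    IFACE_ID = 'cd8c0b10-6735-42d8-afeb-94453cdb2468'      # from the log above
    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port(PORT, bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', PORT, may_exist=True))
        txn.add(api.db_set('Interface', PORT,
                           ('external_ids', {'iface-id': IFACE_ID})))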
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.632 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:06 compute-0 ovn_controller[97827]: 2025-11-29T15:53:06Z|00140|binding|INFO|Releasing lport cd8c0b10-6735-42d8-afeb-94453cdb2468 from this chassis (sb_readonly=0)
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.634 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.635 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/539b3be1-041f-4cb0-bb96-caaac62c4d34.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/539b3be1-041f-4cb0-bb96-caaac62c4d34.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.641 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[81f877d2-55c2-4975-bd1c-fa691b2efb41]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.644 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.644 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-539b3be1-041f-4cb0-bb96-caaac62c4d34
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/539b3be1-041f-4cb0-bb96-caaac62c4d34.pid.haproxy
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID 539b3be1-041f-4cb0-bb96-caaac62c4d34
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
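The agent renders the haproxy configuration above and then launches haproxy inside the ovnmeta namespace (next line). The listener binds the link-local metadata address 169.254.169.254:80 and forwards to the UNIX socket at /var/lib/neutron/metadata_proxy, adding the X-OVN-Network-ID header so the metadata service can resolve the tenant network. A rendered config can be syntax-checked without starting the proxy; a sketch, with the path taken from the launch command below:

    import subprocess

    CFG = '/var/lib/neutron/ovn-metadata-proxy/539b3be1-041f-4cb0-bb96-caaac62c4d34.conf'
    # 'haproxy -c' only parses the file: exit 0 means the config is valid
    res = subprocess.run(['haproxy', '-c', '-f', CFG],
                         capture_output=True, text=True)
    print(res.returncode, res.stdout or res.stderr)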
Nov 29 15:53:06 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:06.646 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34', 'env', 'PROCESS_TAG=haproxy-539b3be1-041f-4cb0-bb96-caaac62c4d34', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/539b3be1-041f-4cb0-bb96-caaac62c4d34.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.684 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431586.683344, 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.685 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] VM Started (Lifecycle Event)
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.710 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.717 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431586.683468, 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.717 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] VM Paused (Lifecycle Event)
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.743 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.748 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:53:06 compute-0 nova_compute[189485]: 2025-11-29 15:53:06.778 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] During sync_power_state the instance has a pending task (spawning). Skip.
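The Started/Paused pair is expected during spawn: nova's libvirt driver creates the guest paused and resumes it only after neutron confirms VIF plugging (the Resumed event arrives at 15:53:08 below). In the sync message, DB power_state 0 and VM power_state 3 are nova.compute.power_state's NOSTATE and PAUSED, and the pending task ('spawning') makes the sync a no-op. A paraphrase of that guard; the constants are copied from nova.compute.power_state, the function is illustrative:

    # nova.compute.power_state constants
    NOSTATE, RUNNING, PAUSED, SHUTDOWN = 0, 1, 3, 4

    def should_sync(db_power_state, vm_power_state, task_state):
        """A pending task wins: skip the sync, as the log line shows."""
        if task_state is not None:          # e.g. 'spawning' -> "Skip."
            return False
        return db_power_state != vm_power_state

    print(should_sync(NOSTATE, PAUSED, 'spawning'))   # False, matching the log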
Nov 29 15:53:07 compute-0 podman[253517]: 2025-11-29 15:53:07.154725369 +0000 UTC m=+0.075465541 container create d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:53:07 compute-0 systemd[1]: Started libpod-conmon-d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00.scope.
Nov 29 15:53:07 compute-0 podman[253517]: 2025-11-29 15:53:07.120060657 +0000 UTC m=+0.040800849 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:53:07 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:53:07 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/88e8482d023e56b912c84b171dd963a217695da6d47e0e7c443155a0c5b77bc7/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:53:07 compute-0 podman[253517]: 2025-11-29 15:53:07.263118804 +0000 UTC m=+0.183859026 container init d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:53:07 compute-0 nova_compute[189485]: 2025-11-29 15:53:07.265 189489 DEBUG nova.network.neutron [req-d5f6e21e-1d11-45d2-8961-c5b56c2c5c34 req-924064d1-3a9c-41b7-b2f3-9feff1ae7a0c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Updated VIF entry in instance network info cache for port fe0e2687-2636-4247-a729-26a0e3c624a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:53:07 compute-0 nova_compute[189485]: 2025-11-29 15:53:07.266 189489 DEBUG nova.network.neutron [req-d5f6e21e-1d11-45d2-8961-c5b56c2c5c34 req-924064d1-3a9c-41b7-b2f3-9feff1ae7a0c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Updating instance_info_cache with network_info: [{"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
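The network_info blob above is nova's per-instance cache entry: one VIF on 10.100.0.0/28 with fixed IP 10.100.0.11, no floating IPs yet, and active still false. A small accessor over the same JSON shape collects addresses per VIF (an illustrative helper, not nova code; compare its output here with the refreshed cache entry at 15:53:14, where a floating IP has appeared):

    def addresses(network_info):
        """Map each VIF id to its fixed IPs and any attached floating IPs."""
        result = {}
        for vif in network_info:
            fixed, floating = [], []
            for subnet in vif['network']['subnets']:
                for ip in subnet['ips']:
                    fixed.append(ip['address'])
                    floating += [f['address'] for f in ip.get('floating_ips', [])]
            result[vif['id']] = {'fixed': fixed, 'floating': floating}
        return result

    # For the entry above:
    #   {'fe0e2687-...': {'fixed': ['10.100.0.11'], 'floating': []}}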
Nov 29 15:53:07 compute-0 podman[253517]: 2025-11-29 15:53:07.277957953 +0000 UTC m=+0.198698135 container start d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:53:07 compute-0 nova_compute[189485]: 2025-11-29 15:53:07.300 189489 DEBUG oslo_concurrency.lockutils [req-d5f6e21e-1d11-45d2-8961-c5b56c2c5c34 req-924064d1-3a9c-41b7-b2f3-9feff1ae7a0c 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:53:07 compute-0 neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34[253530]: [NOTICE]   (253534) : New worker (253536) forked
Nov 29 15:53:07 compute-0 neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34[253530]: [NOTICE]   (253534) : Loading success.
Nov 29 15:53:07 compute-0 nova_compute[189485]: 2025-11-29 15:53:07.538 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431572.537212, ea685573-5d12-4d41-8c8d-1d73dc63399d => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:53:07 compute-0 nova_compute[189485]: 2025-11-29 15:53:07.539 189489 INFO nova.compute.manager [-] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] VM Stopped (Lifecycle Event)
Nov 29 15:53:07 compute-0 nova_compute[189485]: 2025-11-29 15:53:07.561 189489 DEBUG nova.compute.manager [None req-49e78de5-bb86-4554-9107-3e61ec87dfa9 - - - - - -] [instance: ea685573-5d12-4d41-8c8d-1d73dc63399d] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.089 189489 DEBUG nova.compute.manager [req-56f3d1e4-d2b5-4d13-8524-b2452e230f7c req-adf5f276-eacf-4fb2-9efc-905146e8a01f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received event network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.091 189489 DEBUG oslo_concurrency.lockutils [req-56f3d1e4-d2b5-4d13-8524-b2452e230f7c req-adf5f276-eacf-4fb2-9efc-905146e8a01f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.093 189489 DEBUG oslo_concurrency.lockutils [req-56f3d1e4-d2b5-4d13-8524-b2452e230f7c req-adf5f276-eacf-4fb2-9efc-905146e8a01f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.094 189489 DEBUG oslo_concurrency.lockutils [req-56f3d1e4-d2b5-4d13-8524-b2452e230f7c req-adf5f276-eacf-4fb2-9efc-905146e8a01f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.095 189489 DEBUG nova.compute.manager [req-56f3d1e4-d2b5-4d13-8524-b2452e230f7c req-adf5f276-eacf-4fb2-9efc-905146e8a01f 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Processing event network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.097 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.103 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431588.1025505, 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.104 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] VM Resumed (Lifecycle Event)
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.108 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.115 189489 INFO nova.virt.libvirt.driver [-] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Instance spawned successfully.
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.117 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.123 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.137 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.149 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.150 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.151 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.152 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.152 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.153 189489 DEBUG nova.virt.libvirt.driver [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.186 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.245 189489 INFO nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Took 7.19 seconds to spawn the instance on the hypervisor.
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.246 189489 DEBUG nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.322 189489 INFO nova.compute.manager [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Took 7.67 seconds to build instance.
Nov 29 15:53:08 compute-0 nova_compute[189485]: 2025-11-29 15:53:08.343 189489 DEBUG oslo_concurrency.lockutils [None req-5cf91e6d-3cbf-4080-b569-53ab08b3c030 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.746s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:53:08 compute-0 podman[253545]: 2025-11-29 15:53:08.702529164 +0000 UTC m=+0.133215404 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd)
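The podman health_status records here and below are periodic healthcheck transients; the configured test is the /openstack/healthcheck script mounted into each container. The same check can be triggered on demand; a sketch:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test;
    # exit status 0 means healthy
    res = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'])
    print('healthy' if res.returncode == 0 else 'unhealthy')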
Nov 29 15:53:09 compute-0 nova_compute[189485]: 2025-11-29 15:53:09.958 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:10 compute-0 nova_compute[189485]: 2025-11-29 15:53:10.373 189489 DEBUG nova.compute.manager [req-5e61b55b-557b-4bec-8635-94a229a5403b req-68a39161-0770-4001-94ca-b4355a55d7a5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received event network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:53:10 compute-0 nova_compute[189485]: 2025-11-29 15:53:10.374 189489 DEBUG oslo_concurrency.lockutils [req-5e61b55b-557b-4bec-8635-94a229a5403b req-68a39161-0770-4001-94ca-b4355a55d7a5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:53:10 compute-0 nova_compute[189485]: 2025-11-29 15:53:10.374 189489 DEBUG oslo_concurrency.lockutils [req-5e61b55b-557b-4bec-8635-94a229a5403b req-68a39161-0770-4001-94ca-b4355a55d7a5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:53:10 compute-0 nova_compute[189485]: 2025-11-29 15:53:10.374 189489 DEBUG oslo_concurrency.lockutils [req-5e61b55b-557b-4bec-8635-94a229a5403b req-68a39161-0770-4001-94ca-b4355a55d7a5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:53:10 compute-0 nova_compute[189485]: 2025-11-29 15:53:10.374 189489 DEBUG nova.compute.manager [req-5e61b55b-557b-4bec-8635-94a229a5403b req-68a39161-0770-4001-94ca-b4355a55d7a5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] No waiting events found dispatching network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:53:10 compute-0 nova_compute[189485]: 2025-11-29 15:53:10.375 189489 WARNING nova.compute.manager [req-5e61b55b-557b-4bec-8635-94a229a5403b req-68a39161-0770-4001-94ca-b4355a55d7a5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received unexpected event network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 for instance with vm_state active and task_state None.
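The warning is benign: neutron re-announced network-vif-plugged after the instance had already reached vm_state active with no task pending, so there was no registered waiter to pop ("No waiting events found" above). The pop itself is a per-instance event dict guarded by the "<uuid>-events" lock seen in the lockutils lines; a schematic of that pattern (illustrative, not nova's actual code):

    import threading
    from collections import defaultdict

    _lock = threading.Lock()
    _waiters = defaultdict(dict)   # instance uuid -> {event name: waiter}

    def pop_instance_event(instance_uuid, event_name):
        """Return the registered waiter for the event, or None if nobody waits."""
        with _lock:                           # cf. the '-events' lock in the log
            return _waiters[instance_uuid].pop(event_name, None)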
Nov 29 15:53:10 compute-0 podman[253564]: 2025-11-29 15:53:10.697395083 +0000 UTC m=+0.137118659 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:53:10 compute-0 nova_compute[189485]: 2025-11-29 15:53:10.700 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:11 compute-0 nova_compute[189485]: 2025-11-29 15:53:11.645 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:11 compute-0 NetworkManager[56360]: <info>  [1764431591.6479] manager: (patch-br-int-to-provnet-902f0f77-8c45-4eff-be74-67c45c992175): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/66)
Nov 29 15:53:11 compute-0 NetworkManager[56360]: <info>  [1764431591.6555] manager: (patch-provnet-902f0f77-8c45-4eff-be74-67c45c992175-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Nov 29 15:53:11 compute-0 nova_compute[189485]: 2025-11-29 15:53:11.846 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:11 compute-0 ovn_controller[97827]: 2025-11-29T15:53:11Z|00141|binding|INFO|Releasing lport cd8c0b10-6735-42d8-afeb-94453cdb2468 from this chassis (sb_readonly=0)
Nov 29 15:53:11 compute-0 ovn_controller[97827]: 2025-11-29T15:53:11Z|00142|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:53:11 compute-0 nova_compute[189485]: 2025-11-29 15:53:11.874 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:12 compute-0 nova_compute[189485]: 2025-11-29 15:53:12.654 189489 DEBUG nova.compute.manager [req-5eb97eca-fd3c-45a5-8fd9-75e061d30c3c req-2f095bf2-c07d-4588-a599-00d84cf57a61 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received event network-changed-fe0e2687-2636-4247-a729-26a0e3c624a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:53:12 compute-0 nova_compute[189485]: 2025-11-29 15:53:12.655 189489 DEBUG nova.compute.manager [req-5eb97eca-fd3c-45a5-8fd9-75e061d30c3c req-2f095bf2-c07d-4588-a599-00d84cf57a61 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Refreshing instance network info cache due to event network-changed-fe0e2687-2636-4247-a729-26a0e3c624a0. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 15:53:12 compute-0 nova_compute[189485]: 2025-11-29 15:53:12.655 189489 DEBUG oslo_concurrency.lockutils [req-5eb97eca-fd3c-45a5-8fd9-75e061d30c3c req-2f095bf2-c07d-4588-a599-00d84cf57a61 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:53:12 compute-0 nova_compute[189485]: 2025-11-29 15:53:12.655 189489 DEBUG oslo_concurrency.lockutils [req-5eb97eca-fd3c-45a5-8fd9-75e061d30c3c req-2f095bf2-c07d-4588-a599-00d84cf57a61 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:53:12 compute-0 nova_compute[189485]: 2025-11-29 15:53:12.655 189489 DEBUG nova.network.neutron [req-5eb97eca-fd3c-45a5-8fd9-75e061d30c3c req-2f095bf2-c07d-4588-a599-00d84cf57a61 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Refreshing network info cache for port fe0e2687-2636-4247-a729-26a0e3c624a0 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 15:53:14 compute-0 nova_compute[189485]: 2025-11-29 15:53:14.308 189489 DEBUG nova.network.neutron [req-5eb97eca-fd3c-45a5-8fd9-75e061d30c3c req-2f095bf2-c07d-4588-a599-00d84cf57a61 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Updated VIF entry in instance network info cache for port fe0e2687-2636-4247-a729-26a0e3c624a0. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:53:14 compute-0 nova_compute[189485]: 2025-11-29 15:53:14.309 189489 DEBUG nova.network.neutron [req-5eb97eca-fd3c-45a5-8fd9-75e061d30c3c req-2f095bf2-c07d-4588-a599-00d84cf57a61 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Updating instance_info_cache with network_info: [{"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:53:14 compute-0 nova_compute[189485]: 2025-11-29 15:53:14.328 189489 DEBUG oslo_concurrency.lockutils [req-5eb97eca-fd3c-45a5-8fd9-75e061d30c3c req-2f095bf2-c07d-4588-a599-00d84cf57a61 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:53:14 compute-0 nova_compute[189485]: 2025-11-29 15:53:14.961 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:15 compute-0 nova_compute[189485]: 2025-11-29 15:53:15.703 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:17 compute-0 nova_compute[189485]: 2025-11-29 15:53:17.509 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:19 compute-0 nova_compute[189485]: 2025-11-29 15:53:19.963 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:20 compute-0 nova_compute[189485]: 2025-11-29 15:53:20.707 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:21 compute-0 nova_compute[189485]: 2025-11-29 15:53:21.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:21 compute-0 nova_compute[189485]: 2025-11-29 15:53:21.487 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:53:21 compute-0 nova_compute[189485]: 2025-11-29 15:53:21.488 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:53:21 compute-0 nova_compute[189485]: 2025-11-29 15:53:21.787 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:53:21 compute-0 nova_compute[189485]: 2025-11-29 15:53:21.788 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:53:21 compute-0 nova_compute[189485]: 2025-11-29 15:53:21.789 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:53:21 compute-0 nova_compute[189485]: 2025-11-29 15:53:21.790 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:53:22 compute-0 podman[253589]: 2025-11-29 15:53:22.692865199 +0000 UTC m=+0.133684486 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.167 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.199 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.200 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
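Annotation: the Acquiring/Acquired/Releasing triple around "refresh_cache-<uuid>" above is oslo.concurrency's named-lock pattern; a minimal sketch of the same call follows (the guarded body is a hypothetical stand-in, not nova's implementation):

    from oslo_concurrency import lockutils

    def refresh_network_info_cache(uuid):
        """Hypothetical stand-in for nova's cache refresh."""
        print(f"refreshing info_cache for {uuid}")

    instance_uuid = "2c879d1e-7499-4665-8880-438b30ff9d86"

    # With oslo logging at DEBUG, entering/exiting this block emits the same
    # Acquiring/Acquired/Releasing lock lines seen in the journal above.
    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        refresh_network_info_cache(instance_uuid)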
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.202 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.487 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
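Annotation: the "Running periodic task ComputeManager.*" lines all come from oslo.service's periodic-task machinery. A skeleton of how such tasks are declared (the 60-second spacing is an illustrative value, not nova's configuration):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            # One instance per pass gets its network info cache rebuilt;
            # run_periodic_tasks() logs the "Running periodic task" line.
            pass

    mgr = Manager()
    mgr.run_periodic_tasks(None)  # normally driven by the service loop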
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.523 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.525 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.637 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.698 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.699 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.812 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.821 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.906 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.907 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:53:24 compute-0 nova_compute[189485]: 2025-11-29 15:53:24.968 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.004 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
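Annotation: the qemu-img invocations above are wrapped in oslo.concurrency's prlimit helper (1 GiB address space, 30 s CPU, visible as --as=1073741824 --cpu=30). A sketch of issuing the same command through processutils:

    from oslo_concurrency import processutils

    # Resource caps matching the logged invocation.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)

    # Mirrors the logged command line, including the env(1) locale pinning.
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk",
        "--force-share", "--output=json",
        prlimit=limits,
    )
    print(out)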
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.387 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.388 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=72.27693939208984GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.389 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.390 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.505 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.506 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.506 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.507 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.574 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.588 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.606 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.607 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.217s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
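Annotation: placement derives schedulable capacity from the inventory logged at 15:53:25.588 as (total - reserved) * allocation_ratio. A quick check against the logged numbers:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable units")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2 -- consistent with two running
    # 1-vCPU/128 MB/1 GB instances fitting comfortably.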
Nov 29 15:53:25 compute-0 nova_compute[189485]: 2025-11-29 15:53:25.709 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:26 compute-0 nova_compute[189485]: 2025-11-29 15:53:26.604 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:27 compute-0 nova_compute[189485]: 2025-11-29 15:53:27.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:29 compute-0 nova_compute[189485]: 2025-11-29 15:53:29.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:29 compute-0 nova_compute[189485]: 2025-11-29 15:53:29.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:53:29 compute-0 podman[203677]: time="2025-11-29T15:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:53:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:53:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5258 "" "Go-http-client/1.1"
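Annotation: the GET lines above are the prometheus-podman-exporter polling podman's libpod REST API over the unix socket (CONTAINER_HOST=unix:///run/podman/podman.sock in its config). A minimal sketch of the same query, assuming that socket path is reachable as root:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c.get("Names") for c in containers])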
Nov 29 15:53:29 compute-0 nova_compute[189485]: 2025-11-29 15:53:29.972 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:30 compute-0 nova_compute[189485]: 2025-11-29 15:53:30.711 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:31 compute-0 openstack_network_exporter[205841]: ERROR   15:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:53:31 compute-0 openstack_network_exporter[205841]: ERROR   15:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:53:31 compute-0 openstack_network_exporter[205841]: ERROR   15:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:53:31 compute-0 openstack_network_exporter[205841]: ERROR   15:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:53:31 compute-0 openstack_network_exporter[205841]: ERROR   15:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:53:32 compute-0 podman[253627]: 2025-11-29 15:53:32.70024511 +0000 UTC m=+0.139024040 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 15:53:34 compute-0 podman[253657]: 2025-11-29 15:53:34.703959897 +0000 UTC m=+0.134377945 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent)
Nov 29 15:53:34 compute-0 podman[253670]: 2025-11-29 15:53:34.705148109 +0000 UTC m=+0.116293979 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal)
Nov 29 15:53:34 compute-0 podman[253658]: 2025-11-29 15:53:34.719444933 +0000 UTC m=+0.126077051 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 15:53:34 compute-0 podman[253656]: 2025-11-29 15:53:34.728589409 +0000 UTC m=+0.160353923 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, architecture=x86_64, config_id=edpm, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.tags=base rhel9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible)
Nov 29 15:53:34 compute-0 podman[253659]: 2025-11-29 15:53:34.755289357 +0000 UTC m=+0.154666330 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:53:34 compute-0 nova_compute[189485]: 2025-11-29 15:53:34.976 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:35 compute-0 nova_compute[189485]: 2025-11-29 15:53:35.715 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:39 compute-0 podman[253748]: 2025-11-29 15:53:39.695886024 +0000 UTC m=+0.134365415 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 15:53:39 compute-0 nova_compute[189485]: 2025-11-29 15:53:39.978 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:40 compute-0 nova_compute[189485]: 2025-11-29 15:53:40.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:53:40 compute-0 nova_compute[189485]: 2025-11-29 15:53:40.717 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:41 compute-0 podman[253780]: 2025-11-29 15:53:41.630155293 +0000 UTC m=+0.078091832 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:53:43 compute-0 ovn_controller[97827]: 2025-11-29T15:53:43Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:09:15:fd 10.100.0.11
Nov 29 15:53:43 compute-0 ovn_controller[97827]: 2025-11-29T15:53:43Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:09:15:fd 10.100.0.11
Nov 29 15:53:44 compute-0 nova_compute[189485]: 2025-11-29 15:53:44.981 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:45 compute-0 nova_compute[189485]: 2025-11-29 15:53:45.719 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:49 compute-0 nova_compute[189485]: 2025-11-29 15:53:49.984 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:50 compute-0 nova_compute[189485]: 2025-11-29 15:53:50.076 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:50 compute-0 nova_compute[189485]: 2025-11-29 15:53:50.670 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:50 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:50.672 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:53:50 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:50.675 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 15:53:50 compute-0 nova_compute[189485]: 2025-11-29 15:53:50.723 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:53 compute-0 podman[253806]: 2025-11-29 15:53:53.718994141 +0000 UTC m=+0.148162346 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:53:54 compute-0 nova_compute[189485]: 2025-11-29 15:53:54.989 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:55 compute-0 nova_compute[189485]: 2025-11-29 15:53:55.733 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:56 compute-0 nova_compute[189485]: 2025-11-29 15:53:56.752 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:53:57 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:57.679 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
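Annotation: the DbSetCommand above is the metadata agent acknowledging nb_cfg=15 in its Chassis_Private row. A rough sketch of the equivalent ovsdbapp call; the socket path and the fresh connection are assumptions for illustration (the real agent reuses its long-lived southbound IDL):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    idl = connection.OvsdbIdl.from_server("unix:/run/ovn/ovnsb_db.sock",
                                          "OVN_Southbound")
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl, timeout=10))

    # Equivalent of the logged DbSetCommand on the agent's own chassis row.
    sb.db_set(
        "Chassis_Private",
        "3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a",
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "15"}),
    ).execute(check_error=True)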
Nov 29 15:53:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:59.212 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:53:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:59.213 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:53:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:53:59.214 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:53:59 compute-0 podman[203677]: time="2025-11-29T15:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:53:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:53:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5258 "" "Go-http-client/1.1"
Nov 29 15:53:59 compute-0 nova_compute[189485]: 2025-11-29 15:53:59.993 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:54:00 compute-0 nova_compute[189485]: 2025-11-29 15:54:00.736 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.061 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.061 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.061 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.062 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.064 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
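The registration lines above show the agent's startup pattern: each pollster is a stevedore extension loaded from an entry-point namespace and handed to one shared ThreadPoolExecutor, with per-run cache, pollster-history, and discovery-cache dictionaries starting empty. A minimal sketch of that pattern, assuming an illustrative namespace and a placeholder polling function (not ceilometer's actual internals):

    from concurrent.futures import ThreadPoolExecutor
    from stevedore import extension

    # Assumed entry-point namespace; the real agent derives its pollster list
    # from its configured polling sources rather than a hard-coded constant.
    POLLING_NAMESPACE = "ceilometer.poll.compute"

    def run_pollster(ext):
        # Placeholder for one polling pass of a single pollster extension.
        print(f"polling {ext.name}")

    manager = extension.ExtensionManager(namespace=POLLING_NAMESPACE)
    with ThreadPoolExecutor(max_workers=4) as executor:
        for ext in manager:
            executor.submit(run_pollster, ext)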
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.067 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'name': 'te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.069 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.070 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}21f1b25129fd7f828fba82e66d37137d0fe6cb4aa99b37755c299ad1aab8f053" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Nov 29 15:54:01 compute-0 openstack_network_exporter[205841]: ERROR   15:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:54:01 compute-0 openstack_network_exporter[205841]: ERROR   15:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:54:01 compute-0 openstack_network_exporter[205841]: ERROR   15:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:54:01 compute-0 openstack_network_exporter[205841]: ERROR   15:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:54:01 compute-0 openstack_network_exporter[205841]: ERROR   15:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
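The exporter errors above all reduce to missing control sockets: appctl-style calls need the target daemon's *.ctl socket, ovn-northd normally runs on control-plane nodes rather than a compute node, and the datapath queries fail for the same reason. A sketch of the socket check, assuming the conventional default paths (not read from this host's configuration):

    import glob

    # Conventional socket locations for ovsdb-server and ovn-northd; these
    # paths are assumptions, not taken from this deployment.
    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/run/ovn/ovn-northd.*.ctl"):
        matches = glob.glob(pattern)
        print(pattern, "->", matches or "no control socket files found")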
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.952 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 2085 Content-Type: application/json Date: Sat, 29 Nov 2025 15:54:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c358cb20-d23d-4b11-900c-7b8f25158743 x-openstack-request-id: req-c358cb20-d23d-4b11-900c-7b8f25158743 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.952 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232", "name": "tempest-TestServerBasicOps-server-1957561350", "status": "ACTIVE", "tenant_id": "adde993c93894d9681ea78f0147c8a52", "user_id": "6ffdcfadc95949538d09357b0b49d925", "metadata": {"meta1": "data1", "meta2": "data2", "metaN": "dataN"}, "hostId": "0c6f0edc3952e88c93c524c069838216dffa1b3af6dd4e5b65662386", "image": {"id": "6a931c3a-089f-4276-ac71-a0da3ffce7c7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/6a931c3a-089f-4276-ac71-a0da3ffce7c7"}]}, "flavor": {"id": "cde1daa0-956a-446c-a1eb-2046e0cd1fa7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cde1daa0-956a-446c-a1eb-2046e0cd1fa7"}]}, "created": "2025-11-29T15:52:58Z", "updated": "2025-11-29T15:53:08Z", "addresses": {"tempest-TestServerBasicOps-1633809176-network": [{"version": 4, "addr": "10.100.0.11", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:09:15:fd"}, {"version": 4, "addr": "192.168.122.218", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:09:15:fd"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-TestServerBasicOps-399626093", "OS-SRV-USG:launched_at": "2025-11-29T15:53:08.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--2129099579"}, {"name": "tempest-secgroup-smoke-1201886076"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000c", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.953 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 used request id req-c358cb20-d23d-4b11-900c-7b8f25158743 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.954 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '609941f8-b5e1-4f1f-9c99-5e4bc5f5b232', 'name': 'tempest-TestServerBasicOps-server-1957561350', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000c', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'adde993c93894d9681ea78f0147c8a52', 'user_id': '6ffdcfadc95949538d09357b0b49d925', 'hostId': '0c6f0edc3952e88c93c524c069838216dffa1b3af6dd4e5b65662386', 'status': 'active', 'metadata': {'meta1': 'data1', 'meta2': 'data2', 'metaN': 'dataN'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
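The REQ/RESP pair above is the discovery step fetching instance metadata from the Nova API and folding it into the local instance record. A hedged sketch of the same lookup through python-novaclient, with placeholder credentials and endpoint:

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from novaclient import client as nova_client

    # Placeholder credentials and auth URL; the agent uses its own service
    # credentials from ceilometer.conf.
    auth = v3.Password(auth_url="https://keystone.example.com/v3",
                       username="ceilometer", password="secret",
                       project_name="service",
                       user_domain_id="default", project_domain_id="default")
    nova = nova_client.Client("2.1", session=session.Session(auth=auth))

    server = nova.servers.get("609941f8-b5e1-4f1f-9c99-5e4bc5f5b232")
    print(server.name, server.status, server.metadata)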
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.955 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.955 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.955 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.955 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.956 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:54:01.955459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.961 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.965 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 / tapfe0e2687-26 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.966 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
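network.outgoing.bytes is a cumulative counter read per vNIC. A minimal sketch, assuming the samples come from libvirt's per-interface statistics (the domain and tap device names below are taken from this log):

    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000c")
    # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                         tx_bytes, tx_packets, tx_errs, tx_drop)
    stats = dom.interfaceStats("tapfe0e2687-26")
    print("incoming bytes:", stats[0], "outgoing bytes:", stats[4])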
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.967 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.967 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.967 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.967 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.967 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes.delta volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.967 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.968 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
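The .delta meters subtract the previous cumulative reading from the current one, which explains the earlier "No delta meter predecessor" line: with nothing cached yet for an (instance, vNIC) pair, the delta is reported as 0, as it is here for instance 609941f8. A sketch of that bookkeeping (the cache layout is illustrative, not ceilometer's internal structure):

    previous = {}

    def delta(instance_id, nic, current):
        key = (instance_id, nic)
        prev = previous.get(key)
        previous[key] = current
        if prev is None:
            return 0  # no delta meter predecessor yet
        return current - prev

    print(delta("609941f8", "tapfe0e2687-26", 1620))  # first poll -> 0
    print(delta("609941f8", "tapfe0e2687-26", 2100))  # next poll  -> 480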
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.968 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.968 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:54:01.967317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:01.970 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:54:01.968889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.004 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/memory.usage volume: 43.5390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.038 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/memory.usage volume: 46.875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.039 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
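The memory.usage samples are megabytes of guest memory in use (in practice MiB, since libvirt counts KiB), and both fit comfortably inside the 128 MB m1.nano flavor reported at discovery. Checking the numbers:

    # 43.5390625 MiB * 1024 = 44584 KiB  (instance 2c879d1e-...)
    # 46.875     MiB * 1024 = 48000 KiB  (instance 609941f8-...)
    for mib in (43.5390625, 46.875):
        print(f"{mib} MiB = {mib * 1024:.0f} KiB of the flavor's 128 MiB")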
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.039 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.039 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.039 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.040 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes volume: 1262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.040 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.incoming.bytes volume: 1796 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.041 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.041 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.041 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.041 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1957561350>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1957561350>]
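The ERROR above is the manager's blacklist mechanism rather than a crash: the libvirt inspector only provides cumulative counters, so the *.rate pollsters raise PollsterPermanentError and the affected resource is excluded from that pollster on this source from then on (rates can instead be derived downstream from the cumulative or delta meters). A sketch of the pattern, with illustrative names apart from the exception itself:

    class PollsterPermanentError(Exception):
        def __init__(self, fail_res_list):
            self.fail_res_list = fail_res_list

    def poll(pollster, resources, blacklist):
        todo = [r for r in resources if r not in blacklist]
        try:
            return pollster(todo)
        except PollsterPermanentError as err:
            blacklist.update(err.fail_res_list)  # never offer these again
            return []

    def rate_pollster(resources):
        # Stands in for a pollster whose inspector cannot provide rate data.
        raise PollsterPermanentError(set(resources))

    blacklist = set()
    poll(rate_pollster, {"tempest-TestServerBasicOps-server-1957561350"}, blacklist)
    print(blacklist)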
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.042 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.042 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.042 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes.delta volume: 1172 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.042 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.043 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.043 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.043 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.043 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.043 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.044 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.044 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.045 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.045 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:54:02.039864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-29T15:54:02.041302) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:54:02.042255) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.045 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:54:02.043534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:54:02.045246) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.103 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.104 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.171 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.read.bytes volume: 30755328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.172 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.173 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.173 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.174 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.174 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.175 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.176 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:54:02.174403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.176 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.177 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.177 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:54:02.177509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.202 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.203 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.227 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.228 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
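Each instance reports two disk.device.capacity samples because it has two block devices, and the sizes line up with the rest of the log:

    # 1073741824 bytes = 1024**3 = 1 GiB -> the m1.nano flavor's 1 GB root disk
    # 509952 bytes = 498 KiB -> plausibly the config drive
    #                           ("config_drive": "True" in the RESP body above)
    print(1073741824 == 1024 ** 3)  # True
    print(509952 / 1024)            # 498.0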
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.229 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.230 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.230 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.231 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/cpu volume: 122080000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.231 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/cpu volume: 33160000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:54:02.230369) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.232 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
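The cpu meter is cumulative guest CPU time in nanoseconds, so the two samples above convert to roughly two minutes and half a minute of CPU time respectively:

    for ns in (122080000000, 33160000000):
        print(ns / 1e9, "s")  # 122.08 s and 33.16 s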
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.232 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.233 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.233 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.233 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.233 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.234 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 535968866 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.234 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 56326732 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.235 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.read.latency volume: 686273258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.235 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.read.latency volume: 60963070 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.236 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.237 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.237 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.237 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.237 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.238 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.238 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-TestServerBasicOps-server-1957561350>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-TestServerBasicOps-server-1957561350>]
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.238 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.239 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:54:02.233632) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-29T15:54:02.237637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.239 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.240 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:54:02.239799) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.240 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.241 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.read.requests volume: 1110 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.241 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.243 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.243 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.243 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.243 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.244 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.244 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.245 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.245 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.246 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.246 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.246 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.247 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:54:02.243629) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.247 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.247 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.247 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.247 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.248 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.248 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.write.bytes volume: 72962048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.249 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.250 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.250 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:54:02.247588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.250 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.251 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.251 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.251 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.251 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 8782275504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.252 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.252 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.write.latency volume: 7526229727 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.253 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.253 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.254 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:54:02.251413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
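[editor's note] The write-latency values are large because the counter is cumulative: the magnitudes are consistent with libvirt's per-device total-write-time counters, which are reported in nanoseconds (an interpretation, not stated in the log itself). Read that way, the first sample above is about 8.8 seconds of accumulated write time:

    # Assuming the sample is cumulative nanoseconds (libvirt wr_total_times style):
    total_ns = 8782275504
    print(f"{total_ns / 1e9:.2f} s")   # ~8.78 s of accumulated write time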
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.254 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.254 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.254 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.255 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.255 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.255 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.255 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.256 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
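[editor's note] The power.state samples carry the instance power state as a small integer, and a volume of 1 matches "running" under Nova's conventional power-state numbering; the assumption here is that the agent forwards Nova's enumeration unchanged:

    # Nova's power-state numbering (nova.objects.fields.PowerState); assumed
    # to be the encoding behind the power.state volumes above.
    NOVA_POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                        4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}
    print(NOVA_POWER_STATE[1])   # both instances above report RUNNING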
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.256 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.257 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.257 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.257 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.257 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.258 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 308 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.258 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:54:02.255102) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.259 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:54:02.257794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.259 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.write.requests volume: 302 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.259 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.260 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.261 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.261 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.261 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.261 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.261 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:54:02.261608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.262 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.262 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.263 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.263 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.264 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.264 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.264 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.265 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.265 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:54:02.264283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.266 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.266 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.266 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.267 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:54:02.266893) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.266 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.267 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.267 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.267 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.allocation volume: 30351360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.268 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.268 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
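[editor's note] disk.device.allocation reports the bytes each device's backing file actually consumes on the host (consistent with libvirt's "allocation" figure per block device; an interpretation, not stated in the log), distinct from the virtual size carried by the separate disk.device.capacity meter. A quick conversion of the values logged above:

    # Bytes actually allocated on the host per device, converted to MiB.
    for val in (30089216, 512000, 30351360, 512000):
        print(f"{val / 2**20:.1f} MiB")   # ~28.7, 0.5, 28.9, 0.5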
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.269 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.269 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.269 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.270 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.270 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.270 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets volume: 8 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.270 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.270 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.271 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.271 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.271 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.271 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.271 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.272 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.272 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.272 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.272 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.272 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.272 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.272 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.273 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.273 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.273 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.273 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.273 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.274 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.274 14 DEBUG ceilometer.compute.pollsters [-] 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:54:02.270104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:54:02.271516) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:54:02.272583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.275 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:54:02.273960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.275 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.276 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.277 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:54:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:54:02.278 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
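[editor's note] The burst of "Finished processing pollster [...]" lines closes one polling interval: worker pid 14 executes the pollsters while pid 12 interleaves the "_update_status" heartbeat writes, which is why two PIDs alternate throughout this cycle. A minimal producer/consumer sketch of that split, assuming a simple queue between the two processes (the log does not show ceilometer's actual mechanism):

    import multiprocessing as mp

    def poller(q):                       # plays the role of pid 14
        for meter in ("disk.device.usage", "power.state"):
            q.put((meter, "2025-11-29T15:54:02"))   # heartbeat per pollster run
        q.put(None)                      # end of the polling interval

    def status_writer(q):                # plays the role of pid 12 (_update_status)
        while (item := q.get()) is not None:
            meter, ts = item
            print(f"Updated heartbeat for {meter} ({ts})")

    if __name__ == "__main__":
        q = mp.Queue()
        w = mp.Process(target=status_writer, args=(q,))
        w.start()
        poller(q)
        w.join()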
Nov 29 15:54:03 compute-0 podman[253829]: 2025-11-29 15:54:03.661556788 +0000 UTC m=+0.114462869 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
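[editor's note] The podman record above is one periodic healthcheck result for the ceilometer_agent_compute container (health_status=healthy, failing streak 0), produced by its configured test '/openstack/healthcheck compute'. The same status can be read back on the host; a sketch, noting that the inspect key is State.Health on current podman but State.Healthcheck on older releases:

    import json, subprocess

    out = subprocess.run(["podman", "inspect", "ceilometer_agent_compute"],
                         capture_output=True, text=True, check=True).stdout
    state = json.loads(out)[0]["State"]
    health = state.get("Health") or state.get("Healthcheck") or {}
    print(health.get("Status"), health.get("FailingStreak"))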
Nov 29 15:54:04 compute-0 nova_compute[189485]: 2025-11-29 15:54:04.996 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.085 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "f8649788-26c9-4497-a517-f989c3c9cdb7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.086 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.119 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.228 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.229 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.242 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.242 189489 INFO nova.compute.claims [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Claim successful on node compute-0.ctlplane.example.com
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.429 189489 DEBUG nova.compute.provider_tree [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.457 189489 DEBUG nova.scheduler.client.report [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
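[editor's note] Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class, so the claim above is admitted against 32 VCPUs, 7167 MB of RAM and 70.2 GB of disk rather than the raw totals:

    # Schedulable capacity per resource class, from the inventory logged above.
    inventory = {"VCPU": (8, 0, 4.0),
                 "MEMORY_MB": (7679, 512, 1.0),
                 "DISK_GB": (79, 1, 0.9)}
    for rc, (total, reserved, ratio) in inventory.items():
        print(rc, (total - reserved) * ratio)   # 32.0, 7167.0, 70.2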
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.485 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.256s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.486 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.542 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.542 189489 DEBUG nova.network.neutron [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.560 189489 INFO nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.579 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Nov 29 15:54:05 compute-0 podman[253850]: 2025-11-29 15:54:05.646303304 +0000 UTC m=+0.077432733 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2)
Nov 29 15:54:05 compute-0 podman[253852]: 2025-11-29 15:54:05.665445419 +0000 UTC m=+0.087104943 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, vcs-type=git, version=9.6, config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Nov 29 15:54:05 compute-0 podman[253848]: 2025-11-29 15:54:05.677179344 +0000 UTC m=+0.116961746 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, vcs-type=git, version=9.4)
Nov 29 15:54:05 compute-0 podman[253849]: 2025-11-29 15:54:05.683871534 +0000 UTC m=+0.117366037 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 15:54:05 compute-0 podman[253851]: 2025-11-29 15:54:05.702070304 +0000 UTC m=+0.130990595 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.738 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.898 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.899 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.900 189489 INFO nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Creating image(s)#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.901 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "/var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.902 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "/var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.903 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "/var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
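
The three lockutils records above show a complete acquire/held/release cycle of an in-process lock guarding the instance's disk.info file. The same pattern can be reproduced with oslo.concurrency directly; a minimal sketch, with the lock name and function purely illustrative rather than Nova's:

    from oslo_concurrency import lockutils

    # Serialize all writers of a shared metadata file; lockutils emits the
    # same "Acquiring"/"acquired"/"released" debug lines seen above.
    @lockutils.synchronized("disk-info-demo")
    def write_to_disk_info_file():
        pass  # update the disk.info JSON here

    write_to_disk_info_file()
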
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.916 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.976 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
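
The prlimit wrapper in the command above caps qemu-img's address space at 1 GiB (--as=1073741824) and its CPU time at 30 s (--cpu=30), so probing a malformed image cannot hang or balloon the compute service. oslo.concurrency exposes the same guard programmatically; a sketch under the assumption that a real image path is substituted for the placeholder:

    from oslo_concurrency import processutils

    # Probe an image under the same resource caps as the logged command.
    limits = processutils.ProcessLimits(address_space=1073741824, cpu_time=30)
    out, _err = processutils.execute(
        "qemu-img", "info", "/var/lib/nova/instances/_base/<cached-image>",  # placeholder path
        "--force-share", "--output=json",
        prlimit=limits, env_variables={"LC_ALL": "C", "LANG": "C"})
    print(out)
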
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.977 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.981 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:05 compute-0 nova_compute[189485]: 2025-11-29 15:54:05.996 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.065 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.065 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.107 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.108 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.127s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
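
The create_qcow2_image step materializes the instance disk as a copy-on-write qcow2 overlay on top of the cached raw base image, so the 1 GiB virtual disk consumes almost no space until the guest writes to it. The standalone equivalent of the logged command, sketched with subprocess:

    import subprocess

    base = "/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1"
    disk = "/var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk"
    # Reads fall through to the raw base image, writes land in the overlay;
    # the trailing 1073741824 sets the virtual size to 1 GiB.
    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw", disk, "1073741824"],
        check=True)
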
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.109 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.126 189489 DEBUG nova.policy [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '08fa71399ec746088caaa6ce113cf5bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aac53958ac1141be8c52323cdbc3e956', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.162 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.163 189489 DEBUG nova.virt.disk.api [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Checking if we can resize image /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.163 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.220 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.221 189489 DEBUG nova.virt.disk.api [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Cannot resize image /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
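
"Cannot resize image ... to a smaller size" is the expected outcome here, not an error: the flavor's 1 GiB root disk already equals the overlay's virtual size, so there is nothing to grow. The check amounts to comparing the requested size against the virtual-size that qemu-img info reports; a minimal sketch of that comparison:

    import json
    import subprocess

    def can_resize_image(path: str, requested_bytes: int) -> bool:
        # Growth is allowed; shrinking a disk is refused, as logged above.
        info = json.loads(subprocess.check_output(
            ["qemu-img", "info", "--force-share", "--output=json", path]))
        return info["virtual-size"] < requested_bytes
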
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.221 189489 DEBUG nova.objects.instance [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lazy-loading 'migration_context' on Instance uuid f8649788-26c9-4497-a517-f989c3c9cdb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.239 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.239 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Ensure instance console log exists: /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.244 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.245 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.245 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:06 compute-0 nova_compute[189485]: 2025-11-29 15:54:06.781 189489 DEBUG nova.network.neutron [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Successfully created port: bc8a9aec-d49d-411d-8b11-6c05461f6ed4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 15:54:08 compute-0 nova_compute[189485]: 2025-11-29 15:54:08.240 189489 DEBUG nova.network.neutron [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Successfully updated port: bc8a9aec-d49d-411d-8b11-6c05461f6ed4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:54:08 compute-0 nova_compute[189485]: 2025-11-29 15:54:08.275 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:54:08 compute-0 nova_compute[189485]: 2025-11-29 15:54:08.275 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquired lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:54:08 compute-0 nova_compute[189485]: 2025-11-29 15:54:08.275 189489 DEBUG nova.network.neutron [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:54:08 compute-0 nova_compute[189485]: 2025-11-29 15:54:08.343 189489 DEBUG nova.compute.manager [req-c60d80b8-e675-4d8c-a72a-1069335ceda5 req-80fd15e5-226c-4993-8b3b-a36f382cae89 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-changed-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:08 compute-0 nova_compute[189485]: 2025-11-29 15:54:08.343 189489 DEBUG nova.compute.manager [req-c60d80b8-e675-4d8c-a72a-1069335ceda5 req-80fd15e5-226c-4993-8b3b-a36f382cae89 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Refreshing instance network info cache due to event network-changed-bc8a9aec-d49d-411d-8b11-6c05461f6ed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:54:08 compute-0 nova_compute[189485]: 2025-11-29 15:54:08.343 189489 DEBUG oslo_concurrency.lockutils [req-c60d80b8-e675-4d8c-a72a-1069335ceda5 req-80fd15e5-226c-4993-8b3b-a36f382cae89 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:54:08 compute-0 nova_compute[189485]: 2025-11-29 15:54:08.451 189489 DEBUG nova.network.neutron [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:54:09 compute-0 nova_compute[189485]: 2025-11-29 15:54:09.952 189489 DEBUG nova.network.neutron [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Updating instance_info_cache with network_info: [{"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:54:09 compute-0 nova_compute[189485]: 2025-11-29 15:54:09.972 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Releasing lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:54:09 compute-0 nova_compute[189485]: 2025-11-29 15:54:09.973 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Instance network_info: |[{"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
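
The network_info blob cached above is plain JSON, and the fields the rest of the spawn path relies on (device name, MAC, fixed IPs, MTU) can be pulled out directly. A small helper sketched against the structure shown in the log:

    import json

    def summarize_network_info(blob: str) -> None:
        # blob: the JSON list logged by update_instance_cache_with_nw_info
        for vif in json.loads(blob):
            ips = [ip["address"]
                   for subnet in vif["network"]["subnets"]
                   for ip in subnet["ips"]]
            print(vif["devname"], vif["address"], ips,
                  vif["network"]["meta"]["mtu"])
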
Nov 29 15:54:09 compute-0 nova_compute[189485]: 2025-11-29 15:54:09.973 189489 DEBUG oslo_concurrency.lockutils [req-c60d80b8-e675-4d8c-a72a-1069335ceda5 req-80fd15e5-226c-4993-8b3b-a36f382cae89 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:54:09 compute-0 nova_compute[189485]: 2025-11-29 15:54:09.973 189489 DEBUG nova.network.neutron [req-c60d80b8-e675-4d8c-a72a-1069335ceda5 req-80fd15e5-226c-4993-8b3b-a36f382cae89 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Refreshing network info cache for port bc8a9aec-d49d-411d-8b11-6c05461f6ed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:54:09 compute-0 nova_compute[189485]: 2025-11-29 15:54:09.976 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Start _get_guest_xml network_info=[{"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:54:09 compute-0 nova_compute[189485]: 2025-11-29 15:54:09.997 189489 WARNING nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.001 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.006 189489 DEBUG nova.virt.libvirt.host [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.007 189489 DEBUG nova.virt.libvirt.host [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.011 189489 DEBUG nova.virt.libvirt.host [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.011 189489 DEBUG nova.virt.libvirt.host [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.012 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.012 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.013 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.013 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.013 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.014 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.014 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.014 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.015 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.015 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.015 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.016 189489 DEBUG nova.virt.hardware [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
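
The topology walk above enumerates every (sockets, cores, threads) factorization of the vCPU count that fits the limits, then orders the candidates by the stated preference; with one vCPU and no constraints the only candidate is 1:1:1. A toy version of the enumeration (not Nova's implementation):

    from itertools import product

    def possible_topologies(vcpus: int, max_each: int = 65536):
        # Every factorization sockets * cores * threads == vcpus
        # with each dimension within the limit.
        return [(s, c, t)
                for s, c, t in product(range(1, vcpus + 1), repeat=3)
                if s * c * t == vcpus and max(s, c, t) <= max_each]

    print(possible_topologies(1))  # [(1, 1, 1)]
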
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.019 189489 DEBUG nova.virt.libvirt.vif [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:54:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1911938473',display_name='tempest-TestNetworkBasicOps-server-1911938473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1911938473',id=13,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLQHqtawrjL2wQM17CQzJFmBeXoduG4angmB0jo9/RQYpY+v/NgXODpz5JsRknVFMlKfiC+y5ptrvfJjydPALtpgesZrfIdXd90qxXP6XvXJafN6f5SdFPOHokIZP8lIqQ==',key_name='tempest-TestNetworkBasicOps-1298186890',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aac53958ac1141be8c52323cdbc3e956',ramdisk_id='',reservation_id='r-n125kngd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-729114730',owner_user_name='tempest-TestNetworkBasicOps-729114730-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:54:05Z,user_data=None,user_id='08fa71399ec746088caaa6ce113cf5bc',uuid=f8649788-26c9-4497-a517-f989c3c9cdb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.020 189489 DEBUG nova.network.os_vif_util [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converting VIF {"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.020 189489 DEBUG nova.network.os_vif_util [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:5f:3b,bridge_name='br-int',has_traffic_filtering=True,id=bc8a9aec-d49d-411d-8b11-6c05461f6ed4,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8a9aec-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.021 189489 DEBUG nova.objects.instance [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lazy-loading 'pci_devices' on Instance uuid f8649788-26c9-4497-a517-f989c3c9cdb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.037 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <uuid>f8649788-26c9-4497-a517-f989c3c9cdb7</uuid>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <name>instance-0000000d</name>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <nova:name>tempest-TestNetworkBasicOps-server-1911938473</nova:name>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:54:09</nova:creationTime>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:54:10 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:54:10 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:54:10 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:54:10 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:54:10 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:54:10 compute-0 nova_compute[189485]:        <nova:user uuid="08fa71399ec746088caaa6ce113cf5bc">tempest-TestNetworkBasicOps-729114730-project-member</nova:user>
Nov 29 15:54:10 compute-0 nova_compute[189485]:        <nova:project uuid="aac53958ac1141be8c52323cdbc3e956">tempest-TestNetworkBasicOps-729114730</nova:project>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:54:10 compute-0 nova_compute[189485]:        <nova:port uuid="bc8a9aec-d49d-411d-8b11-6c05461f6ed4">
Nov 29 15:54:10 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.10" ipVersion="4"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <system>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <entry name="serial">f8649788-26c9-4497-a517-f989c3c9cdb7</entry>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <entry name="uuid">f8649788-26c9-4497-a517-f989c3c9cdb7</entry>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </system>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <os>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  </os>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <features>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  </features>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk.config"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:7e:5f:3b"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <target dev="tapbc8a9aec-d4"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/console.log" append="off"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <video>
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </video>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:54:10 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:54:10 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:54:10 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:54:10 compute-0 nova_compute[189485]: </domain>
Nov 29 15:54:10 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
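
Once _get_guest_xml returns, the driver hands this <domain> document to libvirt to define and boot the guest. A minimal sketch with libvirt-python, simplified relative to Nova's actual launch path (which goes through its Guest wrapper and lifecycle event handling):

    import libvirt

    xml = "..."  # the <domain> document logged above
    conn = libvirt.open("qemu:///system")
    try:
        dom = conn.defineXML(xml)  # persist the domain definition
        dom.create()               # start the guest
    finally:
        conn.close()
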
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.039 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Preparing to wait for external event network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.040 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.040 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.041 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.042 189489 DEBUG nova.virt.libvirt.vif [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:54:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1911938473',display_name='tempest-TestNetworkBasicOps-server-1911938473',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1911938473',id=13,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLQHqtawrjL2wQM17CQzJFmBeXoduG4angmB0jo9/RQYpY+v/NgXODpz5JsRknVFMlKfiC+y5ptrvfJjydPALtpgesZrfIdXd90qxXP6XvXJafN6f5SdFPOHokIZP8lIqQ==',key_name='tempest-TestNetworkBasicOps-1298186890',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aac53958ac1141be8c52323cdbc3e956',ramdisk_id='',reservation_id='r-n125kngd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-729114730',owner_user_name='tempest-TestNetworkBasicOps-729114730-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:54:05Z,user_data=None,user_id='08fa71399ec746088caaa6ce113cf5bc',uuid=f8649788-26c9-4497-a517-f989c3c9cdb7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.042 189489 DEBUG nova.network.os_vif_util [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converting VIF {"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.043 189489 DEBUG nova.network.os_vif_util [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:7e:5f:3b,bridge_name='br-int',has_traffic_filtering=True,id=bc8a9aec-d49d-411d-8b11-6c05461f6ed4,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8a9aec-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.044 189489 DEBUG os_vif [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:5f:3b,bridge_name='br-int',has_traffic_filtering=True,id=bc8a9aec-d49d-411d-8b11-6c05461f6ed4,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8a9aec-d4') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.045 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.046 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.047 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.051 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.052 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapbc8a9aec-d4, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.053 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapbc8a9aec-d4, col_values=(('external_ids', {'iface-id': 'bc8a9aec-d49d-411d-8b11-6c05461f6ed4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:7e:5f:3b', 'vm-uuid': 'f8649788-26c9-4497-a517-f989c3c9cdb7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
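The ovsdbapp transactions above are how os-vif wires the tap into OVS: AddBridgeCommand with may_exist=True is a no-op when br-int already exists (hence "Transaction caused no change"), then AddPortCommand attaches tapbc8a9aec-d4 and DbSetCommand stamps the Interface row with the Neutron port UUID and MAC in external_ids so ovn-controller can recognize and claim the port. A minimal sketch of the same sequence with ovsdbapp, assuming a local ovsdb-server socket at /run/openvswitch/db.sock:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # may_exist=True keeps both commands idempotent, as in the log.
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapbc8a9aec-d4', may_exist=True))
        txn.add(api.db_set('Interface', 'tapbc8a9aec-d4',
                           ('external_ids',
                            {'iface-id': 'bc8a9aec-d49d-411d-8b11-6c05461f6ed4',
                             'attached-mac': 'fa:16:3e:7e:5f:3b',
                             'iface-status': 'active'})))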
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.056 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:10 compute-0 NetworkManager[56360]: <info>  [1764431650.0581] manager: (tapbc8a9aec-d4): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.060 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.066 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.067 189489 INFO os_vif [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:7e:5f:3b,bridge_name='br-int',has_traffic_filtering=True,id=bc8a9aec-d49d-411d-8b11-6c05461f6ed4,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8a9aec-d4')#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.116 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.117 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.117 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] No VIF found with MAC fa:16:3e:7e:5f:3b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.118 189489 INFO nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Using config drive#033[00m
Nov 29 15:54:10 compute-0 podman[253955]: 2025-11-29 15:54:10.132001169 +0000 UTC m=+0.134201370 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
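The podman health_status entries like the one above come from the container's configured healthcheck ('test': '/openstack/healthcheck') being run periodically; the same probe can be triggered by hand, where exit status 0 corresponds to health_status=healthy (sketch, assumes podman on the host):

    import subprocess

    # Run the multipathd container's configured healthcheck once, on demand.
    subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'], check=True)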
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.739 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.887 189489 INFO nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Creating config drive at /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk.config#033[00m
Nov 29 15:54:10 compute-0 nova_compute[189485]: 2025-11-29 15:54:10.891 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68rgjtal execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.013 189489 DEBUG oslo_concurrency.processutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp68rgjtal" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
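The config drive is just an ISO 9660 image built from a staging directory of metadata files; the two processutils lines above show the exact mkisofs invocation and its 0.122s runtime. A minimal re-creation of that call (flags copied from the log; -V config-2 is the volume label that cloud-init and similar guest agents probe for, -J/-r add Joliet/Rock Ridge name extensions):

    import subprocess

    def build_config_drive(staging_dir, output_path):
        # staging_dir plays the role of /tmp/tmp68rgjtal in the log: the
        # directory tree of metadata files nova has already written out.
        subprocess.run(
            ['/usr/bin/mkisofs', '-o', output_path,
             '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
             '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
             '-quiet', '-J', '-r', '-V', 'config-2', staging_dir],
            check=True)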
Nov 29 15:54:11 compute-0 NetworkManager[56360]: <info>  [1764431651.0946] manager: (tapbc8a9aec-d4): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Nov 29 15:54:11 compute-0 kernel: tapbc8a9aec-d4: entered promiscuous mode
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.104 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:11 compute-0 ovn_controller[97827]: 2025-11-29T15:54:11Z|00143|binding|INFO|Claiming lport bc8a9aec-d49d-411d-8b11-6c05461f6ed4 for this chassis.
Nov 29 15:54:11 compute-0 ovn_controller[97827]: 2025-11-29T15:54:11Z|00144|binding|INFO|bc8a9aec-d49d-411d-8b11-6c05461f6ed4: Claiming fa:16:3e:7e:5f:3b 10.100.0.10
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.114 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:5f:3b 10.100.0.10'], port_security=['fa:16:3e:7e:5f:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f8649788-26c9-4497-a517-f989c3c9cdb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aac53958ac1141be8c52323cdbc3e956', 'neutron:revision_number': '2', 'neutron:security_group_ids': '6406711a-fc6c-4239-9b58-d82b897202ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32ea6e1f-12a5-46ef-82e5-118dabc8eb05, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=bc8a9aec-d49d-411d-8b11-6c05461f6ed4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.115 106713 INFO neutron.agent.ovn.metadata.agent [-] Port bc8a9aec-d49d-411d-8b11-6c05461f6ed4 in datapath 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b bound to our chassis#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.118 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.131 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[6464265e-9c37-4071-a6fc-a01e9d4061fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.132 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9b5208cc-e1 in ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.135 239830 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9b5208cc-e0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.135 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[b0aa6b87-df15-41f6-92f9-f3b021e5d70d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
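The agent's interface work runs through privsep (the reply[...] lines are the privileged daemon answering RPC calls) with pyroute2 underneath. A rough sketch of the veth provisioning logged here, with names from the log; assumes the namespace does not already exist and that the caller has the needed privileges:

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b'
    netns.create(NS)
    ipr = IPRoute()
    # tap9b5208cc-e0 stays in the root namespace (it gets plugged into
    # br-int below); its peer -e1 is pushed into the metadata namespace.
    ipr.link('add', ifname='tap9b5208cc-e0', kind='veth',
             peer='tap9b5208cc-e1')
    idx = ipr.link_lookup(ifname='tap9b5208cc-e1')[0]
    ipr.link('set', index=idx, net_ns_fd=NS)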
Nov 29 15:54:11 compute-0 ovn_controller[97827]: 2025-11-29T15:54:11Z|00145|binding|INFO|Setting lport bc8a9aec-d49d-411d-8b11-6c05461f6ed4 ovn-installed in OVS
Nov 29 15:54:11 compute-0 ovn_controller[97827]: 2025-11-29T15:54:11Z|00146|binding|INFO|Setting lport bc8a9aec-d49d-411d-8b11-6c05461f6ed4 up in Southbound
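With the lport claimed and marked up in the Southbound DB, the binding can be verified from any node with SB access; a sketch using ovn-sbctl's JSON output (assumes the CLI is available on the host):

    import json
    import subprocess

    out = subprocess.run(
        ['ovn-sbctl', '--format=json', 'find', 'Port_Binding',
         'logical_port=bc8a9aec-d49d-411d-8b11-6c05461f6ed4'],
        capture_output=True, text=True, check=True).stdout
    rows = json.loads(out)
    # The chassis and up columns flip once the claim above lands.
    print(rows['headings'])
    print(rows['data'])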
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.136 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[556c5ffd-6c5e-4cc1-b78d-79ffac6ce87d]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.138 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.143 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.150 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[112b3a43-fc19-492b-9a30-1d9a7de3689a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 systemd-machined[155802]: New machine qemu-14-instance-0000000d.
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.166 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[34f1e1c8-ac24-43cf-9069-07454275e55a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 systemd-udevd[253997]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:54:11 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Nov 29 15:54:11 compute-0 NetworkManager[56360]: <info>  [1764431651.1883] device (tapbc8a9aec-d4): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:54:11 compute-0 NetworkManager[56360]: <info>  [1764431651.1948] device (tapbc8a9aec-d4): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.218 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[b96af921-16bf-4fa7-bd71-b08fc48fe923]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.225 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[d467c18f-5d47-4bfa-8b1f-c307b255b988]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 systemd-udevd[254001]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:54:11 compute-0 NetworkManager[56360]: <info>  [1764431651.2285] manager: (tap9b5208cc-e0): new Veth device (/org/freedesktop/NetworkManager/Devices/70)
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.262 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[3a78e714-727d-4e00-886a-4677e191c53d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.267 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[d76b1213-1977-4f07-852e-119fa3c21884]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 NetworkManager[56360]: <info>  [1764431651.2906] device (tap9b5208cc-e0): carrier: link connected
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.296 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[8bcc4ffa-7437-442f-b489-d12587495117]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.317 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[ddfba741-2757-47ea-90a0-048f2b426ef6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b5208cc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:79:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540694, 'reachable_time': 41561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254027, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.333 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[eac4cf9f-db21-4faa-a7e1-0eab85190ecb]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe06:7997'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540694, 'tstamp': 540694}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254028, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.350 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[09b281cc-2a37-43fc-b790-8547a26bdb12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b5208cc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:79:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540694, 'reachable_time': 41561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 254029, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.379 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a5fb1df4-15cf-4f91-9698-427e4553b178]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.439 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[572a6857-a6ec-4bdb-975e-0cea5b77678e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.440 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b5208cc-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.440 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.441 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b5208cc-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.443 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:11 compute-0 NetworkManager[56360]: <info>  [1764431651.4449] manager: (tap9b5208cc-e0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/71)
Nov 29 15:54:11 compute-0 kernel: tap9b5208cc-e0: entered promiscuous mode
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.446 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.447 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b5208cc-e0, col_values=(('external_ids', {'iface-id': '4b21e6be-af46-463f-9bba-3aa8bb5c67fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:11 compute-0 ovn_controller[97827]: 2025-11-29T15:54:11Z|00147|binding|INFO|Releasing lport 4b21e6be-af46-463f-9bba-3aa8bb5c67fb from this chassis (sb_readonly=0)
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.471 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.472 106713 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9b5208cc-e5fa-4a99-99d7-6c6537b56a0b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9b5208cc-e5fa-4a99-99d7-6c6537b56a0b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.473 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[299e422e-4699-47ba-98f1-c525ac75bc9b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.475 106713 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: global
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    log         /dev/log local0 debug
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    log-tag     haproxy-metadata-proxy-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    user        root
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    group       root
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    maxconn     1024
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    pidfile     /var/lib/neutron/external/pids/9b5208cc-e5fa-4a99-99d7-6c6537b56a0b.pid.haproxy
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    daemon
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: defaults
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    log global
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    mode http
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    option httplog
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    option dontlognull
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    option http-server-close
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    option forwardfor
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    retries                 3
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    timeout http-request    30s
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    timeout connect         30s
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    timeout client          32s
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    timeout server          32s
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    timeout http-keep-alive 30s
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: listen listener
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    bind 169.254.169.254:80
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    server metadata /var/lib/neutron/metadata_proxy
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]:    http-request add-header X-OVN-Network-ID 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
Nov 29 15:54:11 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:11.477 106713 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'env', 'PROCESS_TAG=haproxy-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9b5208cc-e5fa-4a99-99d7-6c6537b56a0b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
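The configuration rendered above makes haproxy bind 169.254.169.254:80 inside the ovnmeta namespace and forward to the agent's UNIX socket at /var/lib/neutron/metadata_proxy (haproxy treats a server address starting with '/' as a UNIX socket), adding the X-OVN-Network-ID header on the way. Neutron launches it through rootwrap as logged; outside the agent the equivalent launch is roughly (sketch, root required):

    import subprocess

    subprocess.run(
        ['ip', 'netns', 'exec', 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b',
         'haproxy', '-f',
         '/var/lib/neutron/ovn-metadata-proxy/'
         '9b5208cc-e5fa-4a99-99d7-6c6537b56a0b.conf'],
        check=True)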
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.736 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431651.7356815, f8649788-26c9-4497-a517-f989c3c9cdb7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.736 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] VM Started (Lifecycle Event)#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.766 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.774 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431651.7357929, f8649788-26c9-4497-a517-f989c3c9cdb7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.774 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.800 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.805 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:54:11 compute-0 nova_compute[189485]: 2025-11-29 15:54:11.831 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
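The numeric states in the sync message above map to nova.compute.power_state constants: "DB power_state: 0, VM power_state: 3" means the database still records NOSTATE while libvirt reports the guest paused, which is expected here because the guest is started paused and only resumed once the network-vif-plugged event arrives (see the Resumed lifecycle event below). For reference (constants as defined in nova):

    # nova.compute.power_state values seen in these sync messages.
    NOSTATE = 0    # no state recorded yet (fresh DB row)
    RUNNING = 1    # guest running (the later "VM power_state: 1" sync)
    PAUSED = 3     # guest created paused, awaiting vif-plugged
    SHUTDOWN = 4   # guest shut down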
Nov 29 15:54:11 compute-0 podman[254066]: 2025-11-29 15:54:11.939483398 +0000 UTC m=+0.084157154 container create 35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 15:54:11 compute-0 podman[254066]: 2025-11-29 15:54:11.894382825 +0000 UTC m=+0.039056661 image pull c64a92d8e8fa4f5fb5baf11a4a693a964be3868fb7e72462c6e612c604f8d071 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Nov 29 15:54:11 compute-0 systemd[1]: Started libpod-conmon-35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b.scope.
Nov 29 15:54:12 compute-0 systemd[1]: Started libcrun container.
Nov 29 15:54:12 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/f8583d1591c6ddab750aff0a60473c25f23807332d9ccac6a64e9a81bd135267/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Nov 29 15:54:12 compute-0 podman[254066]: 2025-11-29 15:54:12.044937444 +0000 UTC m=+0.189611240 container init 35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.050 189489 DEBUG nova.network.neutron [req-c60d80b8-e675-4d8c-a72a-1069335ceda5 req-80fd15e5-226c-4993-8b3b-a36f382cae89 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Updated VIF entry in instance network info cache for port bc8a9aec-d49d-411d-8b11-6c05461f6ed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.051 189489 DEBUG nova.network.neutron [req-c60d80b8-e675-4d8c-a72a-1069335ceda5 req-80fd15e5-226c-4993-8b3b-a36f382cae89 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Updating instance_info_cache with network_info: [{"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:54:12 compute-0 podman[254066]: 2025-11-29 15:54:12.065820165 +0000 UTC m=+0.210493921 container start 35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.067 189489 DEBUG oslo_concurrency.lockutils [req-c60d80b8-e675-4d8c-a72a-1069335ceda5 req-80fd15e5-226c-4993-8b3b-a36f382cae89 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:54:12 compute-0 neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b[254090]: [NOTICE]   (254108) : New worker (254111) forked
Nov 29 15:54:12 compute-0 neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b[254090]: [NOTICE]   (254108) : Loading success.
Nov 29 15:54:12 compute-0 podman[254078]: 2025-11-29 15:54:12.086597494 +0000 UTC m=+0.108045736 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.091 189489 DEBUG nova.compute.manager [req-10f2670a-b975-48f8-b2d9-dee02024dcdf req-84c36dfb-61fe-4e9c-b07a-90d517454134 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.091 189489 DEBUG oslo_concurrency.lockutils [req-10f2670a-b975-48f8-b2d9-dee02024dcdf req-84c36dfb-61fe-4e9c-b07a-90d517454134 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.092 189489 DEBUG oslo_concurrency.lockutils [req-10f2670a-b975-48f8-b2d9-dee02024dcdf req-84c36dfb-61fe-4e9c-b07a-90d517454134 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.092 189489 DEBUG oslo_concurrency.lockutils [req-10f2670a-b975-48f8-b2d9-dee02024dcdf req-84c36dfb-61fe-4e9c-b07a-90d517454134 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
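The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils serializing access to the instance's pending-event list; the underlying pattern is roughly (sketch, nova wraps this inside InstanceEvents.pop_instance_event):

    from oslo_concurrency import lockutils

    # Lock name from the log: "<instance uuid>-events".
    with lockutils.lock('f8649788-26c9-4497-a517-f989c3c9cdb7-events'):
        pass  # pop and dispatch the pending network-vif-plugged event here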
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.092 189489 DEBUG nova.compute.manager [req-10f2670a-b975-48f8-b2d9-dee02024dcdf req-84c36dfb-61fe-4e9c-b07a-90d517454134 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Processing event network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.093 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.098 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431652.097894, f8649788-26c9-4497-a517-f989c3c9cdb7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.100 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.103 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.108 189489 INFO nova.virt.libvirt.driver [-] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Instance spawned successfully.#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.109 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.126 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.140 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.146 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.147 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.147 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.148 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.149 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.149 189489 DEBUG nova.virt.libvirt.driver [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.158 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.205 189489 INFO nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Took 6.31 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.206 189489 DEBUG nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.272 189489 INFO nova.compute.manager [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Took 7.08 seconds to build instance.#033[00m
Nov 29 15:54:12 compute-0 nova_compute[189485]: 2025-11-29 15:54:12.286 189489 DEBUG oslo_concurrency.lockutils [None req-2b990fdc-8c62-4482-9d57-d2fadb878bfd 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.200s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:14 compute-0 nova_compute[189485]: 2025-11-29 15:54:14.173 189489 DEBUG nova.compute.manager [req-997989d8-6fa1-467d-a32a-135970963b31 req-a20543a7-9bde-4d78-9510-b3367f36c2e9 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:14 compute-0 nova_compute[189485]: 2025-11-29 15:54:14.174 189489 DEBUG oslo_concurrency.lockutils [req-997989d8-6fa1-467d-a32a-135970963b31 req-a20543a7-9bde-4d78-9510-b3367f36c2e9 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:14 compute-0 nova_compute[189485]: 2025-11-29 15:54:14.174 189489 DEBUG oslo_concurrency.lockutils [req-997989d8-6fa1-467d-a32a-135970963b31 req-a20543a7-9bde-4d78-9510-b3367f36c2e9 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:14 compute-0 nova_compute[189485]: 2025-11-29 15:54:14.175 189489 DEBUG oslo_concurrency.lockutils [req-997989d8-6fa1-467d-a32a-135970963b31 req-a20543a7-9bde-4d78-9510-b3367f36c2e9 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:14 compute-0 nova_compute[189485]: 2025-11-29 15:54:14.175 189489 DEBUG nova.compute.manager [req-997989d8-6fa1-467d-a32a-135970963b31 req-a20543a7-9bde-4d78-9510-b3367f36c2e9 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] No waiting events found dispatching network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:54:14 compute-0 nova_compute[189485]: 2025-11-29 15:54:14.175 189489 WARNING nova.compute.manager [req-997989d8-6fa1-467d-a32a-135970963b31 req-a20543a7-9bde-4d78-9510-b3367f36c2e9 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received unexpected event network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 for instance with vm_state active and task_state None.#033[00m
Nov 29 15:54:15 compute-0 nova_compute[189485]: 2025-11-29 15:54:15.057 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:15 compute-0 nova_compute[189485]: 2025-11-29 15:54:15.744 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:16.617 106814 DEBUG eventlet.wsgi.server [-] (106814) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:16.619 106814 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: Accept: */*
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: Connection: close
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: Content-Type: text/plain
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: Host: 169.254.169.254
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: User-Agent: curl/7.84.0
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: X-Forwarded-For: 10.100.0.11
Nov 29 15:54:16 compute-0 ovn_metadata_agent[106708]: X-Ovn-Network-Id: 539b3be1-041f-4cb0-bb96-caaac62c4d34 __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
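This request is the guest querying the EC2-style metadata service: traffic to 169.254.169.254 is intercepted in the ovnmeta namespace, fronted by the per-network haproxy, and proxied by the metadata agent, which adds X-Forwarded-For and X-Ovn-Network-Id before forwarding to nova. A guest-side equivalent of the logged curl call, assuming only the standard link-local endpoint:

    import urllib.request

    # Same request as the curl/7.84.0 line above, issued from inside the guest;
    # the link-local metadata address needs no routing setup.
    url = "http://169.254.169.254/latest/meta-data/public-ipv4"
    with urllib.request.urlopen(url, timeout=10) as resp:
        # Prints the instance's public (floating) address; the proxied
        # response is the status 200 logged below.
        print(resp.read().decode())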
Nov 29 15:54:16 compute-0 nova_compute[189485]: 2025-11-29 15:54:16.719 189489 DEBUG nova.compute.manager [req-af744373-4690-4dc5-85b3-7a0499657fcc req-eb07dd58-9a59-4c64-88f7-9b081cadd855 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-changed-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:16 compute-0 nova_compute[189485]: 2025-11-29 15:54:16.720 189489 DEBUG nova.compute.manager [req-af744373-4690-4dc5-85b3-7a0499657fcc req-eb07dd58-9a59-4c64-88f7-9b081cadd855 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Refreshing instance network info cache due to event network-changed-bc8a9aec-d49d-411d-8b11-6c05461f6ed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:54:16 compute-0 nova_compute[189485]: 2025-11-29 15:54:16.720 189489 DEBUG oslo_concurrency.lockutils [req-af744373-4690-4dc5-85b3-7a0499657fcc req-eb07dd58-9a59-4c64-88f7-9b081cadd855 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:54:16 compute-0 nova_compute[189485]: 2025-11-29 15:54:16.720 189489 DEBUG oslo_concurrency.lockutils [req-af744373-4690-4dc5-85b3-7a0499657fcc req-eb07dd58-9a59-4c64-88f7-9b081cadd855 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:54:16 compute-0 nova_compute[189485]: 2025-11-29 15:54:16.721 189489 DEBUG nova.network.neutron [req-af744373-4690-4dc5-85b3-7a0499657fcc req-eb07dd58-9a59-4c64-88f7-9b081cadd855 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Refreshing network info cache for port bc8a9aec-d49d-411d-8b11-6c05461f6ed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:54:17 compute-0 nova_compute[189485]: 2025-11-29 15:54:17.894 189489 DEBUG nova.network.neutron [req-af744373-4690-4dc5-85b3-7a0499657fcc req-eb07dd58-9a59-4c64-88f7-9b081cadd855 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Updated VIF entry in instance network info cache for port bc8a9aec-d49d-411d-8b11-6c05461f6ed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:54:17 compute-0 nova_compute[189485]: 2025-11-29 15:54:17.895 189489 DEBUG nova.network.neutron [req-af744373-4690-4dc5-85b3-7a0499657fcc req-eb07dd58-9a59-4c64-88f7-9b081cadd855 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Updating instance_info_cache with network_info: [{"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:54:17 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:17.899 106814 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 15:54:17 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:17.900 106814 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.2816684#033[00m
Nov 29 15:54:17 compute-0 haproxy-metadata-proxy-539b3be1-041f-4cb0-bb96-caaac62c4d34[253536]: 10.100.0.11:38776 [29/Nov/2025:15:54:16.615] listener listener/metadata 0/0/0/1285/1285 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Nov 29 15:54:17 compute-0 nova_compute[189485]: 2025-11-29 15:54:17.918 189489 DEBUG oslo_concurrency.lockutils [req-af744373-4690-4dc5-85b3-7a0499657fcc req-eb07dd58-9a59-4c64-88f7-9b081cadd855 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:18.041 106814 DEBUG eventlet.wsgi.server [-] (106814) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004#033[00m
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:18.042 106814 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: Accept: */*
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: Connection: close
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: Content-Length: 100
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: Content-Type: application/x-www-form-urlencoded
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: Host: 169.254.169.254
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: User-Agent: curl/7.84.0
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: X-Forwarded-For: 10.100.0.11
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: X-Ovn-Network-Id: 539b3be1-041f-4cb0-bb96-caaac62c4d34
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: 
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82#033[00m
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:18.406 106814 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161#033[00m
Nov 29 15:54:18 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:18.407 106814 INFO eventlet.wsgi.server [-] 10.100.0.11,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.3652291#033[00m
Nov 29 15:54:18 compute-0 haproxy-metadata-proxy-539b3be1-041f-4cb0-bb96-caaac62c4d34[253536]: 10.100.0.11:38780 [29/Nov/2025:15:54:18.041] listener listener/metadata 0/0/0/366/366 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
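The POST above is the guest saving its password through the OpenStack metadata API: a 100-byte body (matching Content-Length: 100) written to /openstack/2013-10-17/password, which nova keeps in the instance's system_metadata (it reappears as password_0 in the Instance dump further down). A guest-side sketch of the same call; the payload reconstructs the logged 100-character test string:

    import urllib.request

    data = b"test" * 25   # 100 bytes, matching the logged Content-Length: 100
    req = urllib.request.Request(
        "http://169.254.169.254/openstack/2013-10-17/password",
        data=data,
        headers={"Content-Type": "application/x-www-form-urlencoded"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        print(resp.status)   # 200 on success, as in the haproxy line above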
Nov 29 15:54:18 compute-0 nova_compute[189485]: 2025-11-29 15:54:18.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.059 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.552 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.552 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.553 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.553 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.553 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.555 189489 INFO nova.compute.manager [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Terminating instance#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.555 189489 DEBUG nova.compute.manager [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
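The terminate path serializes on the instance UUID: do_terminate_instance runs under an oslo.concurrency lock named for the instance (released about 1.6 s later, once teardown completes), and the companion "<uuid>-events" lock guards clearing the event queue first. A minimal sketch of that locking shape; apart from the lockutils API, the names are illustrative:

    from oslo_concurrency import lockutils

    def terminate_instance(instance_uuid):
        # Mirrors the 'Lock "<uuid>" acquired by ... do_terminate_instance'
        # lines above: at most one build/terminate per instance at a time.
        @lockutils.synchronized(instance_uuid)
        def do_terminate_instance():
            print(f"Terminating instance {instance_uuid}")
            # ... destroy the domain, unplug VIFs, delete instance files ...
        do_terminate_instance()

    terminate_instance("609941f8-b5e1-4f1f-9c99-5e4bc5f5b232")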
Nov 29 15:54:20 compute-0 kernel: tapfe0e2687-26 (unregistering): left promiscuous mode
Nov 29 15:54:20 compute-0 NetworkManager[56360]: <info>  [1764431660.5873] device (tapfe0e2687-26): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.594 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 ovn_controller[97827]: 2025-11-29T15:54:20Z|00148|binding|INFO|Releasing lport fe0e2687-2636-4247-a729-26a0e3c624a0 from this chassis (sb_readonly=0)
Nov 29 15:54:20 compute-0 ovn_controller[97827]: 2025-11-29T15:54:20Z|00149|binding|INFO|Setting lport fe0e2687-2636-4247-a729-26a0e3c624a0 down in Southbound
Nov 29 15:54:20 compute-0 ovn_controller[97827]: 2025-11-29T15:54:20Z|00150|binding|INFO|Removing iface tapfe0e2687-26 ovn-installed in OVS
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.599 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:20.609 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:09:15:fd 10.100.0.11'], port_security=['fa:16:3e:09:15:fd 10.100.0.11'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.11/28', 'neutron:device_id': '609941f8-b5e1-4f1f-9c99-5e4bc5f5b232', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-539b3be1-041f-4cb0-bb96-caaac62c4d34', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'adde993c93894d9681ea78f0147c8a52', 'neutron:revision_number': '4', 'neutron:security_group_ids': '042dc84a-c12e-4a97-8a9b-39e0fd8bf0c1 78c56b68-6630-4687-9463-d645eaec30be', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=5ba151cd-a8f1-4763-b893-b48bfff2831b, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=fe0e2687-2636-4247-a729-26a0e3c624a0) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:54:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:20.610 106713 INFO neutron.agent.ovn.metadata.agent [-] Port fe0e2687-2636-4247-a729-26a0e3c624a0 in datapath 539b3be1-041f-4cb0-bb96-caaac62c4d34 unbound from our chassis#033[00m
Nov 29 15:54:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:20.612 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 539b3be1-041f-4cb0-bb96-caaac62c4d34, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:54:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:20.613 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1cdb8472-f5d0-436e-b404-8ad71bf0e96b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:20.614 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34 namespace which is not needed anymore#033[00m
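The matched PortBindingUpdatedEvent above is ovsdbapp's row-event machinery at work: the agent registers event classes against the Southbound Port_Binding table, and a match fires when an update shows the chassis column going away (the old row had up=[True] and a chassis; the new row has chassis=[]). A sketch of such an event class, assuming ovsdbapp's RowEvent base; the subclass name and the print are illustrative, not neutron's exact code:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUnboundEvent(row_event.RowEvent):
        event_name = "PortBindingUnboundEvent"

        def __init__(self):
            # Same shape as the matched event above: events=('update',),
            # table='Port_Binding', conditions=None.
            super().__init__(("update",), "Port_Binding", None)

        def run(self, event, row, old):
            # 'old' carries the previous chassis; an empty new value means
            # the lport just left this hypervisor.
            if getattr(old, "chassis", None) and not row.chassis:
                print(f"Port {row.logical_port} unbound from our chassis")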
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.623 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Nov 29 15:54:20 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 41.461s CPU time.
Nov 29 15:54:20 compute-0 systemd-machined[155802]: Machine qemu-13-instance-0000000c terminated.
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.746 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.822 189489 INFO nova.virt.libvirt.driver [-] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Instance destroyed successfully.#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.822 189489 DEBUG nova.objects.instance [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lazy-loading 'resources' on Instance uuid 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:54:20 compute-0 neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34[253530]: [NOTICE]   (253534) : haproxy version is 2.8.14-c23fe91
Nov 29 15:54:20 compute-0 neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34[253530]: [NOTICE]   (253534) : path to executable is /usr/sbin/haproxy
Nov 29 15:54:20 compute-0 neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34[253530]: [ALERT]    (253534) : Current worker (253536) exited with code 143 (Terminated)
Nov 29 15:54:20 compute-0 neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34[253530]: [WARNING]  (253534) : All workers exited. Exiting... (0)
Nov 29 15:54:20 compute-0 systemd[1]: libpod-d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00.scope: Deactivated successfully.
Nov 29 15:54:20 compute-0 podman[254145]: 2025-11-29 15:54:20.834556055 +0000 UTC m=+0.089968230 container died d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.844 189489 DEBUG nova.virt.libvirt.vif [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:52:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1957561350',display_name='tempest-TestServerBasicOps-server-1957561350',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1957561350',id=12,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBB1NUKtQLvUF7OdZp6tiYeKRLfsz+Nt9cU1aO0s91dgvdY4nJNMpSyly2TSvKLRn2+lzCNhuwawR/Kk2cuf6Rew+DV9gI/MN3TDcu77Sx36rOqqRNPSFHa+wNuYLRoFk0Q==',key_name='tempest-TestServerBasicOps-399626093',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:53:08Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='adde993c93894d9681ea78f0147c8a52',ramdisk_id='',reservation_id='r-s22176y8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-2084881187',owner_user_name='tempest-TestServerBasicOps-2084881187-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:54:18Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='6ffdcfadc95949538d09357b0b49d925',uuid=609941f8-b5e1-4f1f-9c99-5e4bc5f5b232,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.845 189489 DEBUG nova.network.os_vif_util [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Converting VIF {"id": "fe0e2687-2636-4247-a729-26a0e3c624a0", "address": "fa:16:3e:09:15:fd", "network": {"id": "539b3be1-041f-4cb0-bb96-caaac62c4d34", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1633809176-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.11", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.218", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "adde993c93894d9681ea78f0147c8a52", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapfe0e2687-26", "ovs_interfaceid": "fe0e2687-2636-4247-a729-26a0e3c624a0", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.847 189489 DEBUG nova.network.os_vif_util [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:09:15:fd,bridge_name='br-int',has_traffic_filtering=True,id=fe0e2687-2636-4247-a729-26a0e3c624a0,network=Network(539b3be1-041f-4cb0-bb96-caaac62c4d34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe0e2687-26') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.848 189489 DEBUG os_vif [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:15:fd,bridge_name='br-int',has_traffic_filtering=True,id=fe0e2687-2636-4247-a729-26a0e3c624a0,network=Network(539b3be1-041f-4cb0-bb96-caaac62c4d34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe0e2687-26') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.851 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.852 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapfe0e2687-26, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.856 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.859 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.863 189489 INFO os_vif [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:09:15:fd,bridge_name='br-int',has_traffic_filtering=True,id=fe0e2687-2636-4247-a729-26a0e3c624a0,network=Network(539b3be1-041f-4cb0-bb96-caaac62c4d34),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapfe0e2687-26')#033[00m
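The unplug above commits a single ovsdbapp transaction, DelPortCommand(port=tapfe0e2687-26, bridge=br-int, if_exists=True), against the local ovsdb-server. A standalone sketch of the same call through ovsdbapp's Open_vSwitch API; the socket path is an assumption for a typical OVS host:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = "unix:/run/openvswitch/db.sock"   # assumed local ovsdb-server socket
    idl = connection.OvsdbIdl.from_server(OVSDB, "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # Equivalent of the logged DelPortCommand: drop the tap from br-int,
    # tolerating the port already being gone.
    api.del_port("tapfe0e2687-26", bridge="br-int", if_exists=True).execute(check_error=True)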
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.864 189489 INFO nova.virt.libvirt.driver [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Deleting instance files /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232_del#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.869 189489 INFO nova.virt.libvirt.driver [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Deletion of /var/lib/nova/instances/609941f8-b5e1-4f1f-9c99-5e4bc5f5b232_del complete#033[00m
Nov 29 15:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00-userdata-shm.mount: Deactivated successfully.
Nov 29 15:54:20 compute-0 systemd[1]: var-lib-containers-storage-overlay-88e8482d023e56b912c84b171dd963a217695da6d47e0e7c443155a0c5b77bc7-merged.mount: Deactivated successfully.
Nov 29 15:54:20 compute-0 podman[254145]: 2025-11-29 15:54:20.902284406 +0000 UTC m=+0.157696581 container cleanup d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 15:54:20 compute-0 systemd[1]: libpod-conmon-d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00.scope: Deactivated successfully.
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.937 189489 INFO nova.compute.manager [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Took 0.38 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.938 189489 DEBUG oslo.service.loopingcall [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.938 189489 DEBUG nova.compute.manager [-] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.938 189489 DEBUG nova.network.neutron [-] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.943 189489 DEBUG nova.compute.manager [req-ed45f6bb-7aef-48e2-82cf-6e86f88b04a5 req-61f76aa2-56f3-432d-8513-bfbf76a21f1a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received event network-vif-unplugged-fe0e2687-2636-4247-a729-26a0e3c624a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.943 189489 DEBUG oslo_concurrency.lockutils [req-ed45f6bb-7aef-48e2-82cf-6e86f88b04a5 req-61f76aa2-56f3-432d-8513-bfbf76a21f1a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.943 189489 DEBUG oslo_concurrency.lockutils [req-ed45f6bb-7aef-48e2-82cf-6e86f88b04a5 req-61f76aa2-56f3-432d-8513-bfbf76a21f1a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.943 189489 DEBUG oslo_concurrency.lockutils [req-ed45f6bb-7aef-48e2-82cf-6e86f88b04a5 req-61f76aa2-56f3-432d-8513-bfbf76a21f1a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.944 189489 DEBUG nova.compute.manager [req-ed45f6bb-7aef-48e2-82cf-6e86f88b04a5 req-61f76aa2-56f3-432d-8513-bfbf76a21f1a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] No waiting events found dispatching network-vif-unplugged-fe0e2687-2636-4247-a729-26a0e3c624a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.944 189489 DEBUG nova.compute.manager [req-ed45f6bb-7aef-48e2-82cf-6e86f88b04a5 req-61f76aa2-56f3-432d-8513-bfbf76a21f1a 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received event network-vif-unplugged-fe0e2687-2636-4247-a729-26a0e3c624a0 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:54:20 compute-0 podman[254191]: 2025-11-29 15:54:20.98087979 +0000 UTC m=+0.049285917 container remove d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:54:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:20.990 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[f06c9e14-183b-4f27-911a-22a9660f4e18]: (4, ('Sat Nov 29 03:54:20 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34 (d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00)\nd58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00\nSat Nov 29 03:54:20 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34 (d58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00)\nd58a40b6bf6f625bfea2c64f8421b30edd5425d03250756b1026a8f99f933a00\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:20.992 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[ebb7d328-ddeb-4f0f-ae03-264918fdae67]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:20 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:20.993 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap539b3be1-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:20 compute-0 nova_compute[189485]: 2025-11-29 15:54:20.995 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:20 compute-0 kernel: tap539b3be1-00: left promiscuous mode
Nov 29 15:54:21 compute-0 nova_compute[189485]: 2025-11-29 15:54:21.015 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:21.016 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[15864015-fb7e-446f-b0ef-dd491ff11e87]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:21.040 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8b5e05b6-9dba-496c-aa95-29b10763223c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:21.041 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8a2f654c-0cae-4988-aed2-fcbcdd3ed5bd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:21.057 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[3d1d175f-cf72-4110-a3f6-5d65d232be24]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 534194, 'reachable_time': 43315, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254204, 'error': None, 'target': 'ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:21.060 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Nov 29 15:54:21 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:21.060 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[3afd8032-b03f-4eb8-9846-0abd3aa33790]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:21 compute-0 systemd[1]: run-netns-ovnmeta\x2d539b3be1\x2d041f\x2d4cb0\x2dbb96\x2dcaaac62c4d34.mount: Deactivated successfully.
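With the last VIF gone, the agent tears down the ovnmeta- namespace; the privileged remove_netns call it issues boils down to unlinking the named network namespace. Roughly what that helper does, shown standalone with pyroute2 (the namespace name is taken from the log; running this needs CAP_SYS_ADMIN):

    from pyroute2 import netns

    ns = "ovnmeta-539b3be1-041f-4cb0-bb96-caaac62c4d34"
    if ns in netns.listnetns():
        netns.remove(ns)   # unlinks /var/run/netns/<ns>, as ip_lib logs above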
Nov 29 15:54:21 compute-0 nova_compute[189485]: 2025-11-29 15:54:21.859 189489 DEBUG nova.network.neutron [-] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:54:21 compute-0 nova_compute[189485]: 2025-11-29 15:54:21.877 189489 INFO nova.compute.manager [-] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Took 0.94 seconds to deallocate network for instance.#033[00m
Nov 29 15:54:21 compute-0 nova_compute[189485]: 2025-11-29 15:54:21.916 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:21 compute-0 nova_compute[189485]: 2025-11-29 15:54:21.917 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:22 compute-0 nova_compute[189485]: 2025-11-29 15:54:22.016 189489 DEBUG nova.compute.provider_tree [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:54:22 compute-0 nova_compute[189485]: 2025-11-29 15:54:22.030 189489 DEBUG nova.scheduler.client.report [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:54:22 compute-0 nova_compute[189485]: 2025-11-29 15:54:22.065 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
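The inventory dict above is what the resource tracker reports to placement, which derives usable capacity per resource class as (total - reserved) * allocation_ratio. Worked through with the logged numbers:

    # Usable capacity per placement's formula, using the inventory logged above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2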
Nov 29 15:54:22 compute-0 nova_compute[189485]: 2025-11-29 15:54:22.119 189489 INFO nova.scheduler.client.report [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Deleted allocations for instance 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232#033[00m
Nov 29 15:54:22 compute-0 nova_compute[189485]: 2025-11-29 15:54:22.174 189489 DEBUG oslo_concurrency.lockutils [None req-0a8c4c6f-7ee5-47b8-8aba-9999f72c3467 6ffdcfadc95949538d09357b0b49d925 adde993c93894d9681ea78f0147c8a52 - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.622s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:22 compute-0 nova_compute[189485]: 2025-11-29 15:54:22.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:22 compute-0 nova_compute[189485]: 2025-11-29 15:54:22.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 15:54:22 compute-0 nova_compute[189485]: 2025-11-29 15:54:22.514 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 15:54:23 compute-0 nova_compute[189485]: 2025-11-29 15:54:23.078 189489 DEBUG nova.compute.manager [req-49b248e0-8eb3-4085-acbb-3f2ae852ecb1 req-b7f2a6c5-2520-4295-bb74-22d1911440e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received event network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:23 compute-0 nova_compute[189485]: 2025-11-29 15:54:23.078 189489 DEBUG oslo_concurrency.lockutils [req-49b248e0-8eb3-4085-acbb-3f2ae852ecb1 req-b7f2a6c5-2520-4295-bb74-22d1911440e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:23 compute-0 nova_compute[189485]: 2025-11-29 15:54:23.079 189489 DEBUG oslo_concurrency.lockutils [req-49b248e0-8eb3-4085-acbb-3f2ae852ecb1 req-b7f2a6c5-2520-4295-bb74-22d1911440e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:23 compute-0 nova_compute[189485]: 2025-11-29 15:54:23.079 189489 DEBUG oslo_concurrency.lockutils [req-49b248e0-8eb3-4085-acbb-3f2ae852ecb1 req-b7f2a6c5-2520-4295-bb74-22d1911440e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "609941f8-b5e1-4f1f-9c99-5e4bc5f5b232-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:23 compute-0 nova_compute[189485]: 2025-11-29 15:54:23.080 189489 DEBUG nova.compute.manager [req-49b248e0-8eb3-4085-acbb-3f2ae852ecb1 req-b7f2a6c5-2520-4295-bb74-22d1911440e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] No waiting events found dispatching network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:54:23 compute-0 nova_compute[189485]: 2025-11-29 15:54:23.080 189489 WARNING nova.compute.manager [req-49b248e0-8eb3-4085-acbb-3f2ae852ecb1 req-b7f2a6c5-2520-4295-bb74-22d1911440e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received unexpected event network-vif-plugged-fe0e2687-2636-4247-a729-26a0e3c624a0 for instance with vm_state deleted and task_state None.#033[00m
Nov 29 15:54:23 compute-0 nova_compute[189485]: 2025-11-29 15:54:23.081 189489 DEBUG nova.compute.manager [req-49b248e0-8eb3-4085-acbb-3f2ae852ecb1 req-b7f2a6c5-2520-4295-bb74-22d1911440e0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Received event network-vif-deleted-fe0e2687-2636-4247-a729-26a0e3c624a0 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:23 compute-0 nova_compute[189485]: 2025-11-29 15:54:23.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.510 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.510 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.511 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.511 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.620 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:24 compute-0 podman[254205]: 2025-11-29 15:54:24.648606717 +0000 UTC m=+0.099575299 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.686 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.687 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.748 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.754 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.808 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.808 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:24 compute-0 nova_compute[189485]: 2025-11-29 15:54:24.864 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
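The four qemu-img calls above are the resource tracker's periodic disk audit: nova shells out through oslo.concurrency's prlimit wrapper, which is where the `--as=1073741824 --cpu=30` caps come from. A minimal sketch reproducing one such call, assuming oslo.concurrency is installed and the instance disk path from the log exists on your host:

    import json

    from oslo_concurrency import processutils

    DISK = '/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk'

    # --as=1073741824 --cpu=30 in the logged command map to these fields;
    # execute() re-wraps the command with "python3 -m oslo_concurrency.prlimit".
    limits = processutils.ProcessLimits(address_space=1024 * 1024 * 1024,
                                        cpu_time=30)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info', DISK, '--force-share', '--output=json',
        prlimit=limits)

    info = json.loads(out)
    print(info['format'], info['virtual-size'], info.get('actual-size'))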
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.231 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.233 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5070MB free_disk=72.27680587768555GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.233 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.234 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.335 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.335 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance f8649788-26c9-4497-a517-f989c3c9cdb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.335 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.336 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.443 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.462 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
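The inventory dict reported to placement above determines schedulable capacity as (total - reserved) * allocation_ratio per resource class. A quick check against the logged numbers:

    # Effective capacity placement derives from the inventory in the log.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }

    for rc, inv in inventory.items():
        effective = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {effective:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2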
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.486 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.486 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
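The Acquiring/acquired/released triplets bracketing the audit (lockutils.py:404/409/423) are emitted by oslo.concurrency's synchronized decorator. A minimal sketch of the same pattern, with a hypothetical function standing in for the tracker method:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def update_available_resource():
        # Critical section: while this runs, any other caller of a function
        # synchronized on 'compute_resources' logs "Acquiring lock ..." and
        # blocks until the "released" line appears.
        pass

    update_available_resource()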
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.749 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:25 compute-0 nova_compute[189485]: 2025-11-29 15:54:25.856 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:26 compute-0 nova_compute[189485]: 2025-11-29 15:54:26.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:26 compute-0 nova_compute[189485]: 2025-11-29 15:54:26.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:26 compute-0 nova_compute[189485]: 2025-11-29 15:54:26.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:27 compute-0 nova_compute[189485]: 2025-11-29 15:54:27.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:29 compute-0 nova_compute[189485]: 2025-11-29 15:54:29.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:54:29 compute-0 nova_compute[189485]: 2025-11-29 15:54:29.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 15:54:29 compute-0 podman[203677]: time="2025-11-29T15:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:54:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:54:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5263 "" "Go-http-client/1.1"
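The two GETs above are the podman exporter scraping the libpod REST API over the UNIX socket it mounts (unix:///run/podman/podman.sock, per the podman_exporter config earlier). A sketch of the same query from the host using only the standard library, assuming the socket path from that config:

    import http.client
    import socket

    class UDSHTTPConnection(http.client.HTTPConnection):
        """http.client connection routed over a UNIX domain socket."""
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UDSHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), 'bytes')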
Nov 29 15:54:30 compute-0 nova_compute[189485]: 2025-11-29 15:54:30.752 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:30 compute-0 nova_compute[189485]: 2025-11-29 15:54:30.859 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:30 compute-0 ovn_controller[97827]: 2025-11-29T15:54:30Z|00151|binding|INFO|Releasing lport 4b21e6be-af46-463f-9bba-3aa8bb5c67fb from this chassis (sb_readonly=0)
Nov 29 15:54:30 compute-0 ovn_controller[97827]: 2025-11-29T15:54:30Z|00152|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:54:31 compute-0 nova_compute[189485]: 2025-11-29 15:54:31.076 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:31 compute-0 openstack_network_exporter[205841]: ERROR   15:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:54:31 compute-0 openstack_network_exporter[205841]: ERROR   15:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:54:31 compute-0 openstack_network_exporter[205841]: ERROR   15:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:54:31 compute-0 openstack_network_exporter[205841]: ERROR   15:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:54:31 compute-0 openstack_network_exporter[205841]: ERROR   15:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
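The four exporter errors above share one cause: openstack_network_exporter probes daemons through their appctl control sockets, and on a compute node there are no ovn-northd or ovsdb-server sockets to find (northd runs on the controllers). A quick host-side check of which .ctl sockets actually exist, using the directories the exporter mounts from the host:

    import glob

    for pattern in ('/var/run/openvswitch/*.ctl',
                    '/var/lib/openvswitch/ovn/*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'none')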
Nov 29 15:54:34 compute-0 podman[254241]: 2025-11-29 15:54:34.672304987 +0000 UTC m=+0.108404227 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 15:54:35 compute-0 nova_compute[189485]: 2025-11-29 15:54:35.756 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:35 compute-0 nova_compute[189485]: 2025-11-29 15:54:35.815 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431660.8135335, 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:54:35 compute-0 nova_compute[189485]: 2025-11-29 15:54:35.815 189489 INFO nova.compute.manager [-] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] VM Stopped (Lifecycle Event)#033[00m
Nov 29 15:54:35 compute-0 ovn_controller[97827]: 2025-11-29T15:54:35Z|00153|binding|INFO|Releasing lport 4b21e6be-af46-463f-9bba-3aa8bb5c67fb from this chassis (sb_readonly=0)
Nov 29 15:54:35 compute-0 ovn_controller[97827]: 2025-11-29T15:54:35Z|00154|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:54:35 compute-0 nova_compute[189485]: 2025-11-29 15:54:35.863 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:35 compute-0 nova_compute[189485]: 2025-11-29 15:54:35.864 189489 DEBUG nova.compute.manager [None req-ea821c3b-26d4-4372-9d92-8e520b2cc3ad - - - - - -] [instance: 609941f8-b5e1-4f1f-9c99-5e4bc5f5b232] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:35 compute-0 nova_compute[189485]: 2025-11-29 15:54:35.889 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:36 compute-0 podman[254262]: 2025-11-29 15:54:36.67181766 +0000 UTC m=+0.110635937 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 15:54:36 compute-0 podman[254263]: 2025-11-29 15:54:36.671990434 +0000 UTC m=+0.116647857 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:54:36 compute-0 podman[254261]: 2025-11-29 15:54:36.677489122 +0000 UTC m=+0.121432077 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.openshift.expose-services=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 29 15:54:36 compute-0 podman[254265]: 2025-11-29 15:54:36.689358762 +0000 UTC m=+0.126418851 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vendor=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64)
Nov 29 15:54:36 compute-0 podman[254264]: 2025-11-29 15:54:36.707055447 +0000 UTC m=+0.143911761 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:54:40 compute-0 podman[254358]: 2025-11-29 15:54:40.68323283 +0000 UTC m=+0.133218784 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible)
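Each podman health_status record above is a periodic run of the healthcheck declared in the container's config_data (the 'healthcheck' key). The same check can be invoked on demand; a sketch assuming the podman CLI and a few of the container names from this log:

    import subprocess

    # 'podman healthcheck run' exits 0 when the check passes.
    for name in ('podman_exporter', 'ceilometer_agent_compute', 'multipathd'):
        r = subprocess.run(['podman', 'healthcheck', 'run', name])
        state = 'healthy' if r.returncode == 0 else f'unhealthy ({r.returncode})'
        print(name, state)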
Nov 29 15:54:40 compute-0 nova_compute[189485]: 2025-11-29 15:54:40.758 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:40 compute-0 nova_compute[189485]: 2025-11-29 15:54:40.865 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.531 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.533 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.555 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.650 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.651 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.664 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.665 189489 INFO nova.compute.claims [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.871 189489 DEBUG nova.compute.provider_tree [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.893 189489 DEBUG nova.scheduler.client.report [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.923 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.272s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.925 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.976 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 15:54:41 compute-0 nova_compute[189485]: 2025-11-29 15:54:41.977 189489 DEBUG nova.network.neutron [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.011 189489 INFO nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.029 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.123 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.124 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.125 189489 INFO nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Creating image(s)#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.126 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.126 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.127 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.145 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.210 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.211 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "bc62df192b9cc3765848644231821ffd9bd86fa9" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.212 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "bc62df192b9cc3765848644231821ffd9bd86fa9" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.223 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.280 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.281 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9,backing_fmt=raw /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.324 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9,backing_fmt=raw /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.325 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "bc62df192b9cc3765848644231821ffd9bd86fa9" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.325 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.385 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.387 189489 DEBUG nova.virt.disk.api [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Checking if we can resize image /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.387 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.465 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.466 189489 DEBUG nova.virt.disk.api [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Cannot resize image /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
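The spawn path above layers a per-instance qcow2 overlay on the cached raw base image in _base, then checks whether the overlay needs growing; the "Cannot resize image ... to a smaller size" line is the guard declining because the requested 1073741824 bytes is not larger than the overlay's current virtual size. A sketch reproducing the same overlay, with an illustrative output path (nova writes under the instance directory):

    import subprocess

    base = '/var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9'
    overlay = '/tmp/a1c56ffa-disk'  # illustrative path

    # Same command as the log: qcow2 overlay, raw backing file, 1 GiB virtual.
    subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                    '-o', f'backing_file={base},backing_fmt=raw',
                    overlay, '1073741824'],
                   check=True)

    # Inspect the copy-on-write chain the overlay now points at.
    subprocess.run(['qemu-img', 'info', '--backing-chain', overlay], check=True)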
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.467 189489 DEBUG nova.objects.instance [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lazy-loading 'migration_context' on Instance uuid a1c56ffa-6d1c-408c-8667-517745513fd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.480 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.480 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Ensure instance console log exists: /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.481 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.482 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.482 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:42 compute-0 podman[254394]: 2025-11-29 15:54:42.650124036 +0000 UTC m=+0.098853180 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:54:42 compute-0 nova_compute[189485]: 2025-11-29 15:54:42.819 189489 DEBUG nova.policy [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '997fde32c4f7472e87493536b60e7b64', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 15:54:43 compute-0 nova_compute[189485]: 2025-11-29 15:54:43.575 189489 DEBUG nova.network.neutron [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Successfully created port: 05c6eb06-b3ad-4a74-8b52-5aa37a365626 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 15:54:45 compute-0 ovn_controller[97827]: 2025-11-29T15:54:45Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:7e:5f:3b 10.100.0.10
Nov 29 15:54:45 compute-0 ovn_controller[97827]: 2025-11-29T15:54:45Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:7e:5f:3b 10.100.0.10
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.759 189489 DEBUG nova.network.neutron [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Successfully updated port: 05c6eb06-b3ad-4a74-8b52-5aa37a365626 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.764 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.781 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.782 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquired lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.782 189489 DEBUG nova.network.neutron [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.868 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.937 189489 DEBUG nova.compute.manager [req-097c6bc1-763f-49be-9479-7188cb95cbb1 req-5f04462b-7030-4c27-ab7c-5719ee7ea447 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received event network-changed-05c6eb06-b3ad-4a74-8b52-5aa37a365626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.937 189489 DEBUG nova.compute.manager [req-097c6bc1-763f-49be-9479-7188cb95cbb1 req-5f04462b-7030-4c27-ab7c-5719ee7ea447 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Refreshing instance network info cache due to event network-changed-05c6eb06-b3ad-4a74-8b52-5aa37a365626. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:54:45 compute-0 nova_compute[189485]: 2025-11-29 15:54:45.938 189489 DEBUG oslo_concurrency.lockutils [req-097c6bc1-763f-49be-9479-7188cb95cbb1 req-5f04462b-7030-4c27-ab7c-5719ee7ea447 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
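
Two contexts contend here for the `refresh_cache-a1c56ffa-...` lock: the boot request holds it while building the network info cache, and the external `network-changed` event handler queues behind it, only acquiring the lock once the boot path releases it a couple of seconds later. A minimal sketch of the oslo.concurrency primitive that provides this serialization (the helper name is hypothetical):

    # Sketch: per-instance serialization of network cache refreshes with
    # oslo.concurrency, as the nova code paths above do.
    from oslo_concurrency import lockutils

    def refresh_instance_cache(instance_uuid):
        with lockutils.lock('refresh_cache-%s' % instance_uuid):
            # Only one thread rebuilds the cache at a time; the
            # network-changed event handler blocks here until the boot
            # path releases the lock.
            rebuild_network_info_cache(instance_uuid)  # hypothetical helper
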
Nov 29 15:54:46 compute-0 nova_compute[189485]: 2025-11-29 15:54:46.013 189489 DEBUG nova.network.neutron [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:54:46 compute-0 nova_compute[189485]: 2025-11-29 15:54:46.108 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.133 189489 DEBUG nova.network.neutron [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updating instance_info_cache with network_info: [{"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.172 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Releasing lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.173 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Instance network_info: |[{"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.173 189489 DEBUG oslo_concurrency.lockutils [req-097c6bc1-763f-49be-9479-7188cb95cbb1 req-5f04462b-7030-4c27-ab7c-5719ee7ea447 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.174 189489 DEBUG nova.network.neutron [req-097c6bc1-763f-49be-9479-7188cb95cbb1 req-5f04462b-7030-4c27-ab7c-5719ee7ea447 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Refreshing network info cache for port 05c6eb06-b3ad-4a74-8b52-5aa37a365626 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.178 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Start _get_guest_xml network_info=[{"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:51:36Z,direct_url=<?>,disk_format='qcow2',id=276c0a04-08bd-40bb-ad7b-a0be69fa4466,min_disk=0,min_ram=0,name='tempest-scenario-img--1468111566',owner='cb266773cd4c4eb0904e7249f2b6cb92',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:51:38Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.186 189489 WARNING nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
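
The warning above is informational: the host reports more than one socket per NUMA node, so Nova could not honour the `socket` PCI NUMA affinity policy if an instance requested it (this m1.nano boot does not). The policy is requested per flavor or image as an extra spec; the sketch below assumes openstacksdk's flavor extra-specs helper, so verify the method name against your SDK version:

    # Illustrative only: requesting the 'socket' PCI NUMA affinity policy
    # on a flavor. create_flavor_extra_specs is assumed from openstacksdk;
    # valid policy values are required/preferred/legacy/socket.
    import openstack

    conn = openstack.connect(cloud='mycloud')
    flavor = conn.compute.find_flavor('m1.nano')
    conn.compute.create_flavor_extra_specs(
        flavor, {'hw:pci_numa_affinity_policy': 'socket'})
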
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.209 189489 DEBUG nova.virt.libvirt.host [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.210 189489 DEBUG nova.virt.libvirt.host [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.215 189489 DEBUG nova.virt.libvirt.host [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.216 189489 DEBUG nova.virt.libvirt.host [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
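
The two probes above are how the libvirt driver decides where CPU limits can be enforced: the cgroups-v1 `cpu` controller is absent on this host, but the v2 unified hierarchy provides one. An approximation of the v2 check, which mostly amounts to reading the unified hierarchy's controller list:

    # Approximation of the cgroups CPU-controller probes logged above.
    from pathlib import Path

    def host_has_cpu_controller():
        controllers = Path('/sys/fs/cgroup/cgroup.controllers')
        if controllers.exists():          # cgroups v2 (unified hierarchy)
            return 'cpu' in controllers.read_text().split()
        return Path('/sys/fs/cgroup/cpu').exists()   # cgroups v1 fallback

    print(host_has_cpu_controller())      # True on this compute host
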
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.217 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.218 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:51:36Z,direct_url=<?>,disk_format='qcow2',id=276c0a04-08bd-40bb-ad7b-a0be69fa4466,min_disk=0,min_ram=0,name='tempest-scenario-img--1468111566',owner='cb266773cd4c4eb0904e7249f2b6cb92',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:51:38Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.219 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.220 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.221 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.222 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.223 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.224 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.225 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.226 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.226 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.227 189489 DEBUG nova.virt.hardware [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
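
With no constraints from flavor or image (all preferences 0:0:0 and maxima of 65536), the only factorization of one vCPU is 1 socket x 1 core x 1 thread, which is exactly the `<topology>` element emitted in the guest XML below. A simplified sketch of the enumeration `nova.virt.hardware` performs (the real code adds preference-based sorting on top):

    # Simplified sketch: enumerate sockets*cores*threads factorizations
    # of the vCPU count within the given maxima.
    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        for sockets in range(1, min(vcpus, max_sockets) + 1):
            for cores in range(1, min(vcpus, max_cores) + 1):
                for threads in range(1, min(vcpus, max_threads) + 1):
                    if sockets * cores * threads == vcpus:
                        yield sockets, cores, threads

    print(list(possible_topologies(1)))   # [(1, 1, 1)], as logged above
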
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.234 189489 DEBUG nova.virt.libvirt.vif [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo',id=14,image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='4838e190-17b5-46fc-b5c5-64e289c1eccb'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb266773cd4c4eb0904e7249f2b6cb92',ramdisk_id='',reservation_id='r-n6js5k2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-739897620',owner_user_name='tempest-PrometheusGabbiTest-739897620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:54:42Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='997fde32c4f7472e87493536b60e7b64',uuid=a1c56ffa-6d1c-408c-8667-517745513fd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.236 189489 DEBUG nova.network.os_vif_util [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converting VIF {"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.237 189489 DEBUG nova.network.os_vif_util [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=05c6eb06-b3ad-4a74-8b52-5aa37a365626,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c6eb06-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
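
`nova_to_osvif_vif` converts Nova's JSON VIF dictionary into the typed os-vif object model so the `ovs` plugin can plug it without knowing Nova internals. Roughly what the converted object above corresponds to when built by hand (field values come from the log; direct construction like this is illustrative, not how Nova does it):

    # Illustrative hand-construction of the VIFOpenVSwitch object that
    # nova.network.os_vif_util produced above.
    from os_vif.objects import network, vif

    osvif_vif = vif.VIFOpenVSwitch(
        id='05c6eb06-b3ad-4a74-8b52-5aa37a365626',
        address='fa:16:3e:0e:87:f3',
        bridge_name='br-int',
        has_traffic_filtering=True,
        plugin='ovs',
        vif_name='tap05c6eb06-b3',
        preserve_on_delete=False,
        network=network.Network(
            id='7871c73c-0a09-4317-aff1-d5a297fb41ee',
            bridge='br-int', mtu=1442))
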
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.238 189489 DEBUG nova.objects.instance [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lazy-loading 'pci_devices' on Instance uuid a1c56ffa-6d1c-408c-8667-517745513fd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.255 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <uuid>a1c56ffa-6d1c-408c-8667-517745513fd0</uuid>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <name>instance-0000000e</name>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <nova:name>te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo</nova:name>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:54:47</nova:creationTime>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:54:47 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:54:47 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:54:47 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:54:47 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:54:47 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:54:47 compute-0 nova_compute[189485]:        <nova:user uuid="997fde32c4f7472e87493536b60e7b64">tempest-PrometheusGabbiTest-739897620-project-member</nova:user>
Nov 29 15:54:47 compute-0 nova_compute[189485]:        <nova:project uuid="cb266773cd4c4eb0904e7249f2b6cb92">tempest-PrometheusGabbiTest-739897620</nova:project>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="276c0a04-08bd-40bb-ad7b-a0be69fa4466"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:54:47 compute-0 nova_compute[189485]:        <nova:port uuid="05c6eb06-b3ad-4a74-8b52-5aa37a365626">
Nov 29 15:54:47 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.182" ipVersion="4"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <system>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <entry name="serial">a1c56ffa-6d1c-408c-8667-517745513fd0</entry>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <entry name="uuid">a1c56ffa-6d1c-408c-8667-517745513fd0</entry>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </system>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <os>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  </os>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <features>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  </features>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk.config"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:0e:87:f3"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <target dev="tap05c6eb06-b3"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/console.log" append="off"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <video>
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </video>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:54:47 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:54:47 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:54:47 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:54:47 compute-0 nova_compute[189485]: </domain>
Nov 29 15:54:47 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
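
The XML above is the domain definition Nova hands to libvirt; note that `<memory>` is in KiB (131072 = 128 MiB, matching the flavor) and the interface targets the tap device that gets attached to br-int. Once the guest is running, the live definition can be read back with the libvirt Python bindings:

    # Fetch the running domain's XML back from libvirt; a read-only
    # connection suffices for inspection.
    import libvirt

    conn = libvirt.openReadOnly('qemu:///system')
    dom = conn.lookupByName('instance-0000000e')
    print(dom.XMLDesc(0))
    conn.close()
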
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.256 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Preparing to wait for external event network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.256 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.257 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.257 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
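
Before plugging the VIF, Nova registers a waiter for the `network-vif-plugged` event that Neutron will deliver once OVN reports the port up; the `a1c56ffa-...-events` lock above guards the registry of pending waiters. A stripped-down, stdlib-only analogue of that prepare/deliver pattern (the real `nova.compute.manager.InstanceEvents` is richer and eventlet-based):

    # Stripped-down analogue of the prepare/deliver event pattern: the
    # spawn path prepares a waiter, the external-event RPC delivers it.
    import threading
    from collections import defaultdict

    class InstanceEvents:
        def __init__(self):
            self._lock = threading.Lock()
            self._events = defaultdict(dict)   # uuid -> {event: Event}

        def prepare(self, instance_uuid, event_name):
            with self._lock:
                return self._events[instance_uuid].setdefault(
                    event_name, threading.Event())

        def deliver(self, instance_uuid, event_name):
            with self._lock:
                waiter = self._events[instance_uuid].pop(event_name, None)
            if waiter is not None:
                waiter.set()

    # spawn side:
    #   waiter = events.prepare(uuid, 'network-vif-plugged-<port-id>')
    #   ... plug VIF, define and start the domain ...
    #   waiter.wait(timeout=300)
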
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.258 189489 DEBUG nova.virt.libvirt.vif [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo',id=14,image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='4838e190-17b5-46fc-b5c5-64e289c1eccb'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='cb266773cd4c4eb0904e7249f2b6cb92',ramdisk_id='',reservation_id='r-n6js5k2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-739897620',owner_user_name='tempest-PrometheusGabbiTest-739897620-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:54:42Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='997fde32c4f7472e87493536b60e7b64',uuid=a1c56ffa-6d1c-408c-8667-517745513fd0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.258 189489 DEBUG nova.network.os_vif_util [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converting VIF {"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.259 189489 DEBUG nova.network.os_vif_util [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0e:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=05c6eb06-b3ad-4a74-8b52-5aa37a365626,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c6eb06-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.259 189489 DEBUG os_vif [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=05c6eb06-b3ad-4a74-8b52-5aa37a365626,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c6eb06-b3') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.260 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.260 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.261 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.265 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.265 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap05c6eb06-b3, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.266 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap05c6eb06-b3, col_values=(('external_ids', {'iface-id': '05c6eb06-b3ad-4a74-8b52-5aa37a365626', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0e:87:f3', 'vm-uuid': 'a1c56ffa-6d1c-408c-8667-517745513fd0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
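
The os-vif ovs plugin performs the plug as small OVSDB transactions: an idempotent bridge add (a no-op here, br-int already exists, hence "Transaction caused no change"), then an AddPort plus a DbSet writing `iface-id` and `attached-mac` into the Interface's external_ids. That `iface-id` is what ovn-controller matches against the logical port when it claims the binding a few lines below. Roughly the same transaction issued directly through ovsdbapp (the OVSDB socket path is an assumption):

    # Rough ovsdbapp equivalent of the AddPortCommand/DbSetCommand
    # transaction logged above; the db.sock path is an assumption.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with ovs.transaction(check_error=True) as txn:
        txn.add(ovs.add_port('br-int', 'tap05c6eb06-b3', may_exist=True))
        txn.add(ovs.db_set(
            'Interface', 'tap05c6eb06-b3',
            ('external_ids', {
                'iface-id': '05c6eb06-b3ad-4a74-8b52-5aa37a365626',
                'attached-mac': 'fa:16:3e:0e:87:f3'})))
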
Nov 29 15:54:47 compute-0 NetworkManager[56360]: <info>  [1764431687.2685] manager: (tap05c6eb06-b3): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/72)
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.267 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.271 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.275 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.275 189489 INFO os_vif [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0e:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=05c6eb06-b3ad-4a74-8b52-5aa37a365626,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c6eb06-b3')#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.338 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.338 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.338 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] No VIF found with MAC fa:16:3e:0e:87:f3, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.339 189489 INFO nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Using config drive#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.706 189489 INFO nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Creating config drive at /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk.config#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.712 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4x4e2y52 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.838 189489 DEBUG oslo_concurrency.processutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp4x4e2y52" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
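
The config drive is nothing more than an ISO9660 image (volume label `config-2`) built from a staging directory of metadata files; the build above took 0.126s. The logged invocation, as Nova issues it through oslo.concurrency:

    # The mkisofs call from the log, via oslo.concurrency; the /tmp
    # staging directory holds the generated metadata files.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs', '-o',
        '/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', '/tmp/tmp4x4e2y52')
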
Nov 29 15:54:47 compute-0 NetworkManager[56360]: <info>  [1764431687.8945] manager: (tap05c6eb06-b3): new Tun device (/org/freedesktop/NetworkManager/Devices/73)
Nov 29 15:54:47 compute-0 kernel: tap05c6eb06-b3: entered promiscuous mode
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.902 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:47 compute-0 ovn_controller[97827]: 2025-11-29T15:54:47Z|00155|binding|INFO|Claiming lport 05c6eb06-b3ad-4a74-8b52-5aa37a365626 for this chassis.
Nov 29 15:54:47 compute-0 ovn_controller[97827]: 2025-11-29T15:54:47Z|00156|binding|INFO|05c6eb06-b3ad-4a74-8b52-5aa37a365626: Claiming fa:16:3e:0e:87:f3 10.100.0.182
Nov 29 15:54:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:47.909 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:87:f3 10.100.0.182'], port_security=['fa:16:3e:0e:87:f3 10.100.0.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.182/16', 'neutron:device_id': 'a1c56ffa-6d1c-408c-8667-517745513fd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'b5e134a6-ec2b-4ce9-9b80-87ce5b922531', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=517fd69e-9ef0-4dda-87e3-69c54b736518, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=05c6eb06-b3ad-4a74-8b52-5aa37a365626) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:54:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:47.909 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 05c6eb06-b3ad-4a74-8b52-5aa37a365626 in datapath 7871c73c-0a09-4317-aff1-d5a297fb41ee bound to our chassis#033[00m
Nov 29 15:54:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:47.911 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7871c73c-0a09-4317-aff1-d5a297fb41ee#033[00m
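
"Provisioning metadata" means the agent ensures an `ovnmeta-<network>` namespace exists with 169.254.169.254 configured inside it (the netlink dumps further below show exactly that) and a proxy listening there for this datapath. From the guest's point of view the result is a plain HTTP endpoint on the link-local address:

    # What the provisioned endpoint serves to the guest: plain HTTP on
    # the link-local metadata address (requests used for brevity).
    import requests

    md = requests.get(
        'http://169.254.169.254/openstack/latest/meta_data.json',
        timeout=5).json()
    print(md['uuid'])   # a1c56ffa-6d1c-408c-8667-517745513fd0
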
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.920 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:47 compute-0 ovn_controller[97827]: 2025-11-29T15:54:47Z|00157|binding|INFO|Setting lport 05c6eb06-b3ad-4a74-8b52-5aa37a365626 ovn-installed in OVS
Nov 29 15:54:47 compute-0 ovn_controller[97827]: 2025-11-29T15:54:47Z|00158|binding|INFO|Setting lport 05c6eb06-b3ad-4a74-8b52-5aa37a365626 up in Southbound
Nov 29 15:54:47 compute-0 nova_compute[189485]: 2025-11-29 15:54:47.931 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:47.930 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[b2ee56a5-b49c-49f8-b852-60b4df5933e3]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:47 compute-0 systemd-machined[155802]: New machine qemu-15-instance-0000000e.
Nov 29 15:54:47 compute-0 systemd-udevd[254454]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:54:47 compute-0 NetworkManager[56360]: <info>  [1764431687.9661] device (tap05c6eb06-b3): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:54:47 compute-0 NetworkManager[56360]: <info>  [1764431687.9667] device (tap05c6eb06-b3): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:54:47 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Nov 29 15:54:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:47.970 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[19a9ccf5-c307-4958-a947-cc425716eae6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:47.974 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[abf9aa7a-b235-49fc-aeb7-3c2239a894a1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:48.004 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[874bb64f-cafc-4814-8922-9b7df01019fc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:48.021 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[77b09b0e-42c5-4767-8330-99b7bfc19ae2]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7871c73c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:cd:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527242, 'reachable_time': 23593, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254460, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:54:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:48.037 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1b710614-0a10-4812-ac88-7652ed6e5389]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7871c73c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527251, 'tstamp': 527251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254465, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap7871c73c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527254, 'tstamp': 527254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254465, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
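The two privsep replies above are netlink dumps taken inside the ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee namespace (the 'target' field in each header): an RTM_NEWLINK for the veth device tap7871c73c-01 (operstate UP, MTU 1500, MAC fa:16:3e:e8:cd:76) followed by RTM_NEWADDR records showing the metadata address 169.254.169.254/32 plus 10.100.0.2/16 on the same interface. A minimal sketch of reproducing that view with pyroute2, which is what the privsep daemon is driving here; the namespace and device names are copied from the log, and the script assumes root plus pyroute2 installed:

```python
# Sketch: inspect the metadata tap device inside the OVN metadata
# namespace, roughly what the privsep replies above contain.
from socket import AF_INET
from pyroute2 import NetNS

NSNAME = 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee'  # target from the log
ns = NetNS(NSNAME)
try:
    idx = ns.link_lookup(ifname='tap7871c73c-01')[0]
    link = ns.get_links(idx)[0]                     # one RTM_NEWLINK message
    addrs = ns.get_addr(index=idx, family=AF_INET)  # RTM_NEWADDR messages
    print(link.get_attr('IFLA_ADDRESS'),            # fa:16:3e:e8:cd:76
          link.get_attr('IFLA_OPERSTATE'),          # UP
          [a.get_attr('IFA_ADDRESS') for a in addrs])  # 169.254.169.254, 10.100.0.2
finally:
    ns.close()
```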
Nov 29 15:54:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:48.038 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7871c73c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.040 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:48.041 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7871c73c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.042 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:48.042 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:54:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:48.042 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7871c73c-00, col_values=(('external_ids', {'iface-id': '44ccce0e-f764-41d1-8796-ff08932a6de2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:54:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:48.043 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
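The ovsdbapp transactions above re-plug the metadata port: a DelPortCommand against br-ex, an AddPortCommand into br-int, and a DbSetCommand stamping the Interface's external_ids with the Neutron iface-id. Both "Transaction caused no change" lines mean the port was already in br-int with that iface-id, so the writes were idempotent no-ops. A hedged sketch of the same sequence using ovsdbapp's Open vSwitch schema API; the socket path is a typical default, not taken from the log:

```python
# Sketch: the del_port/add_port/db_set sequence from the log, via
# ovsdbapp's OVS schema API (db.sock path is an assumed default).
from ovsdbapp.backend.ovs_idl import connection
from ovsdbapp.schema.open_vswitch import impl_idl

idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                      'Open_vSwitch')
ovs = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

with ovs.transaction(check_error=True) as txn:
    txn.add(ovs.del_port('tap7871c73c-00', bridge='br-ex', if_exists=True))
    txn.add(ovs.add_port('br-int', 'tap7871c73c-00', may_exist=True))
    txn.add(ovs.db_set('Interface', 'tap7871c73c-00',
                       ('external_ids',
                        {'iface-id': '44ccce0e-f764-41d1-8796-ff08932a6de2'})))
```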
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.517 189489 DEBUG nova.network.neutron [req-097c6bc1-763f-49be-9479-7188cb95cbb1 req-5f04462b-7030-4c27-ab7c-5719ee7ea447 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updated VIF entry in instance network info cache for port 05c6eb06-b3ad-4a74-8b52-5aa37a365626. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.518 189489 DEBUG nova.network.neutron [req-097c6bc1-763f-49be-9479-7188cb95cbb1 req-5f04462b-7030-4c27-ab7c-5719ee7ea447 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updating instance_info_cache with network_info: [{"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.537 189489 DEBUG oslo_concurrency.lockutils [req-097c6bc1-763f-49be-9479-7188cb95cbb1 req-5f04462b-7030-4c27-ab7c-5719ee7ea447 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.552 189489 DEBUG nova.compute.manager [req-8dbe770c-bc0e-4022-b662-fe46a0ed684e req-103a513e-e16f-42ae-9944-5a26e1c398e2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received event network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.552 189489 DEBUG oslo_concurrency.lockutils [req-8dbe770c-bc0e-4022-b662-fe46a0ed684e req-103a513e-e16f-42ae-9944-5a26e1c398e2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.553 189489 DEBUG oslo_concurrency.lockutils [req-8dbe770c-bc0e-4022-b662-fe46a0ed684e req-103a513e-e16f-42ae-9944-5a26e1c398e2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.553 189489 DEBUG oslo_concurrency.lockutils [req-8dbe770c-bc0e-4022-b662-fe46a0ed684e req-103a513e-e16f-42ae-9944-5a26e1c398e2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.553 189489 DEBUG nova.compute.manager [req-8dbe770c-bc0e-4022-b662-fe46a0ed684e req-103a513e-e16f-42ae-9944-5a26e1c398e2 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Processing event network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
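The acquire/release bracketing around pop_instance_event above (the "a1c56ffa-...-events" lock, held for under a millisecond) is oslo.concurrency's named-lock pattern; nova serializes event delivery per instance this way. A generic sketch of the idea, with illustrative names rather than nova's real internals:

```python
# Sketch: per-key serialization with oslo.concurrency, as in the
# "<instance-uuid>-events" lock above (registry and names illustrative).
from oslo_concurrency import lockutils

_events = {}  # instance uuid -> list of pending event names

def pop_instance_event(instance_uuid, event_name):
    with lockutils.lock('%s-events' % instance_uuid):
        pending = _events.get(instance_uuid, [])
        if event_name in pending:
            pending.remove(event_name)
            return event_name
        return None  # the "No waiting events found" path seen later in the log
```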
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.912 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431688.9117079, a1c56ffa-6d1c-408c-8667-517745513fd0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.913 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] VM Started (Lifecycle Event)#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.920 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.937 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.938 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.947 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.953 189489 INFO nova.virt.libvirt.driver [-] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Instance spawned successfully.#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.954 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.978 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.979 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431688.9157403, a1c56ffa-6d1c-408c-8667-517745513fd0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:54:48 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.979 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] VM Paused (Lifecycle Event)#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.995 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.996 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.997 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.998 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:48.999 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.000 189489 DEBUG nova.virt.libvirt.driver [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
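The six "Found default" records above answer the earlier "Attempting to register defaults" line for instance a1c56ffa-6d1c-408c-8667-517745513fd0. Collected in one place, with values verbatim from the log:

```python
# Image-property defaults nova registered for this instance
# (values copied from the six log records above).
registered_image_property_defaults = {
    'hw_cdrom_bus': 'sata',
    'hw_disk_bus': 'virtio',
    'hw_input_bus': 'usb',
    'hw_pointer_model': 'usbtablet',
    'hw_video_model': 'virtio',
    'hw_vif_model': 'virtio',
}
```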
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.635 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.637 189489 INFO nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Took 7.51 seconds to spawn the instance on the hypervisor.#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.637 189489 DEBUG nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.645 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431688.9382837, a1c56ffa-6d1c-408c-8667-517745513fd0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.646 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] VM Resumed (Lifecycle Event)#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.697 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.702 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.735 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.749 189489 INFO nova.compute.manager [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Took 8.14 seconds to build instance.#033[00m
Nov 29 15:54:49 compute-0 nova_compute[189485]: 2025-11-29 15:54:49.771 189489 DEBUG oslo_concurrency.lockutils [None req-1119fc88-9783-4e93-9ab0-3f02726eb09c 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:50 compute-0 nova_compute[189485]: 2025-11-29 15:54:50.694 189489 DEBUG nova.compute.manager [req-d15fae5c-da25-4d56-bee6-2c674d33a67b req-83bff6d9-130a-4799-a645-375533bd840e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received event network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:50 compute-0 nova_compute[189485]: 2025-11-29 15:54:50.695 189489 DEBUG oslo_concurrency.lockutils [req-d15fae5c-da25-4d56-bee6-2c674d33a67b req-83bff6d9-130a-4799-a645-375533bd840e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:50 compute-0 nova_compute[189485]: 2025-11-29 15:54:50.695 189489 DEBUG oslo_concurrency.lockutils [req-d15fae5c-da25-4d56-bee6-2c674d33a67b req-83bff6d9-130a-4799-a645-375533bd840e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:50 compute-0 nova_compute[189485]: 2025-11-29 15:54:50.695 189489 DEBUG oslo_concurrency.lockutils [req-d15fae5c-da25-4d56-bee6-2c674d33a67b req-83bff6d9-130a-4799-a645-375533bd840e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:50 compute-0 nova_compute[189485]: 2025-11-29 15:54:50.695 189489 DEBUG nova.compute.manager [req-d15fae5c-da25-4d56-bee6-2c674d33a67b req-83bff6d9-130a-4799-a645-375533bd840e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] No waiting events found dispatching network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:54:50 compute-0 nova_compute[189485]: 2025-11-29 15:54:50.696 189489 WARNING nova.compute.manager [req-d15fae5c-da25-4d56-bee6-2c674d33a67b req-83bff6d9-130a-4799-a645-375533bd840e 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received unexpected event network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 for instance with vm_state active and task_state None.#033[00m
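The WARNING above is the late-event path: the build already consumed its network-vif-plugged waiter ("Instance event wait completed in 0 seconds" earlier), so this second delivery for port 05c6eb06-... finds the instance active with no task and no registered waiter ("No waiting events found"). A minimal sketch of the waiter pattern that produces both outcomes, using plain threading primitives rather than nova's actual classes:

```python
# Sketch: why a late network-vif-plugged is "unexpected" (illustrative
# waiter registry, not nova's real InstanceEvents implementation).
import threading

class InstanceEvents:
    def __init__(self):
        self._waiters = {}  # (instance, event) -> threading.Event

    def prepare(self, instance, event):
        ev = threading.Event()
        self._waiters[(instance, event)] = ev
        return ev            # spawn blocks on ev.wait()

    def pop(self, instance, event):
        ev = self._waiters.pop((instance, event), None)
        if ev is None:
            return False     # late delivery -> "Received unexpected event"
        ev.set()             # wakes the waiter -> "wait completed"
        return True
```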
Nov 29 15:54:50 compute-0 nova_compute[189485]: 2025-11-29 15:54:50.717 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:50 compute-0 nova_compute[189485]: 2025-11-29 15:54:50.765 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:52 compute-0 nova_compute[189485]: 2025-11-29 15:54:52.038 189489 INFO nova.compute.manager [None req-d6b83ad8-9042-4d4d-ba47-aa5e9d288d6e 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Get console output#033[00m
Nov 29 15:54:52 compute-0 nova_compute[189485]: 2025-11-29 15:54:52.130 239607 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 15:54:52 compute-0 nova_compute[189485]: 2025-11-29 15:54:52.269 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:53 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:53.708 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:54:53 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:53.710 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 8 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
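The SbGlobalUpdateEvent above shows nb_cfg moving from 15 to 16; the agent deliberately waits (here 8 seconds) before acknowledging, and the acknowledgement is the Chassis_Private external_ids write visible further down ('neutron:ovn-metadata-sb-cfg': '16'). A hedged sketch of that delayed ack; sb_api stands for an ovsdbapp southbound-DB handle and its construction is omitted:

```python
# Sketch: delayed nb_cfg acknowledgement (delay, record uuid and value
# from the log; sb_api is an assumed ovsdbapp SB-DB handle).
import threading

def ack_nb_cfg(sb_api, chassis_private_uuid, nb_cfg, delay):
    def _write():
        sb_api.db_set(
            'Chassis_Private', chassis_private_uuid,
            ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
            if_exists=True).execute(check_error=True)
    threading.Timer(delay, _write).start()

# ack_nb_cfg(sb_api, '3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a', 16, delay=8)
```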
Nov 29 15:54:53 compute-0 nova_compute[189485]: 2025-11-29 15:54:53.721 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:54 compute-0 nova_compute[189485]: 2025-11-29 15:54:54.440 189489 DEBUG nova.compute.manager [req-b02fcbd8-255b-4d65-bac4-de55554a55bf req-b1b89c71-fcec-4155-8179-c0cd7dec552b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-changed-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:54:54 compute-0 nova_compute[189485]: 2025-11-29 15:54:54.441 189489 DEBUG nova.compute.manager [req-b02fcbd8-255b-4d65-bac4-de55554a55bf req-b1b89c71-fcec-4155-8179-c0cd7dec552b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Refreshing instance network info cache due to event network-changed-bc8a9aec-d49d-411d-8b11-6c05461f6ed4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:54:54 compute-0 nova_compute[189485]: 2025-11-29 15:54:54.441 189489 DEBUG oslo_concurrency.lockutils [req-b02fcbd8-255b-4d65-bac4-de55554a55bf req-b1b89c71-fcec-4155-8179-c0cd7dec552b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:54:54 compute-0 nova_compute[189485]: 2025-11-29 15:54:54.441 189489 DEBUG oslo_concurrency.lockutils [req-b02fcbd8-255b-4d65-bac4-de55554a55bf req-b1b89c71-fcec-4155-8179-c0cd7dec552b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:54:54 compute-0 nova_compute[189485]: 2025-11-29 15:54:54.441 189489 DEBUG nova.network.neutron [req-b02fcbd8-255b-4d65-bac4-de55554a55bf req-b1b89c71-fcec-4155-8179-c0cd7dec552b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Refreshing network info cache for port bc8a9aec-d49d-411d-8b11-6c05461f6ed4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:54:55 compute-0 podman[254476]: 2025-11-29 15:54:55.692402834 +0000 UTC m=+0.130086990 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
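The podman health_status records embed their config_data as a Python dict literal, so pieces can be parsed safely with ast.literal_eval rather than eval. A sketch using the healthcheck sub-dict copied verbatim from the record above:

```python
# Sketch: podman health_status records carry config_data as a Python
# literal; ast.literal_eval parses it safely (excerpt from the log).
import ast, pprint

excerpt = ("{'test': '/openstack/healthcheck podman_exporter', "
           "'mount': '/var/lib/openstack/healthchecks/podman_exporter'}")
healthcheck = ast.literal_eval(excerpt)
pprint.pprint(healthcheck)  # {'mount': '/var/lib/...', 'test': '/openstack/...'}
```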
Nov 29 15:54:55 compute-0 nova_compute[189485]: 2025-11-29 15:54:55.768 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:56 compute-0 nova_compute[189485]: 2025-11-29 15:54:56.882 189489 DEBUG nova.network.neutron [req-b02fcbd8-255b-4d65-bac4-de55554a55bf req-b1b89c71-fcec-4155-8179-c0cd7dec552b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Updated VIF entry in instance network info cache for port bc8a9aec-d49d-411d-8b11-6c05461f6ed4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Nov 29 15:54:56 compute-0 nova_compute[189485]: 2025-11-29 15:54:56.883 189489 DEBUG nova.network.neutron [req-b02fcbd8-255b-4d65-bac4-de55554a55bf req-b1b89c71-fcec-4155-8179-c0cd7dec552b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Updating instance_info_cache with network_info: [{"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:54:56 compute-0 nova_compute[189485]: 2025-11-29 15:54:56.904 189489 DEBUG oslo_concurrency.lockutils [req-b02fcbd8-255b-4d65-bac4-de55554a55bf req-b1b89c71-fcec-4155-8179-c0cd7dec552b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-f8649788-26c9-4497-a517-f989c3c9cdb7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:54:57 compute-0 nova_compute[189485]: 2025-11-29 15:54:57.141 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:57 compute-0 nova_compute[189485]: 2025-11-29 15:54:57.271 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:54:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:59.212 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:54:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:59.213 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:54:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:54:59.213 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:54:59 compute-0 podman[203677]: time="2025-11-29T15:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:54:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:54:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5257 "" "Go-http-client/1.1"
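The two GET lines above are the prometheus-podman-exporter polling podman's libpod REST API (containers/json, then containers/stats) over the UNIX socket it has mounted, /run/podman/podman.sock. The same query can be issued from the Python stdlib; the socket path and request path are copied from the log:

```python
# Sketch: query podman's libpod REST API over its UNIX socket,
# mirroring the GET /v4.9.3/libpod/containers/json call above.
import http.client, json, socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client.HTTPConnection over a UNIX-domain socket."""
    def __init__(self, path):
        super().__init__('localhost')  # host only used for the Host header
        self.unix_path = path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.unix_path)
        self.sock = s

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print(len(containers), 'containers')
```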
Nov 29 15:55:00 compute-0 nova_compute[189485]: 2025-11-29 15:55:00.772 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:01 compute-0 openstack_network_exporter[205841]: ERROR   15:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:55:01 compute-0 openstack_network_exporter[205841]: ERROR   15:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:55:01 compute-0 openstack_network_exporter[205841]: ERROR   15:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:55:01 compute-0 openstack_network_exporter[205841]: ERROR   15:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:55:01 compute-0 openstack_network_exporter[205841]: ERROR   15:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
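These ERROR lines are the openstack_network_exporter probing appctl control sockets for daemons that either do not run on a compute node (ovn-northd) or expose no userspace datapath here (the dpif-netdev/pmd-* calls fail because the host uses the kernel datapath); on a node like this they are expected noise. A quick check of which control sockets actually exist; the run-directory paths are common OVS/OVN defaults and an assumption, not taken from the log:

```python
# Sketch: list appctl control sockets present on the host (glob
# patterns are typical OVS/OVN defaults, not from the log).
import glob

for pattern in ('/var/run/openvswitch/*.ctl', '/var/run/ovn/*.ctl'):
    print(pattern, '->', glob.glob(pattern) or 'none')
```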
Nov 29 15:55:01 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:01.713 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:55:02 compute-0 nova_compute[189485]: 2025-11-29 15:55:02.274 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:02 compute-0 nova_compute[189485]: 2025-11-29 15:55:02.987 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:03 compute-0 nova_compute[189485]: 2025-11-29 15:55:03.671 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "e88c51da-0fd1-40c7-9084-fb672a0ac109" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:03 compute-0 nova_compute[189485]: 2025-11-29 15:55:03.673 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:03 compute-0 nova_compute[189485]: 2025-11-29 15:55:03.702 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Nov 29 15:55:03 compute-0 nova_compute[189485]: 2025-11-29 15:55:03.785 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:03 compute-0 nova_compute[189485]: 2025-11-29 15:55:03.786 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:03 compute-0 nova_compute[189485]: 2025-11-29 15:55:03.798 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Nov 29 15:55:03 compute-0 nova_compute[189485]: 2025-11-29 15:55:03.799 189489 INFO nova.compute.claims [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Claim successful on node compute-0.ctlplane.example.com#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.036 189489 DEBUG nova.compute.provider_tree [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.058 189489 DEBUG nova.scheduler.client.report [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
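The inventory blob above is what the resource tracker reports to placement for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd; usable capacity per resource class is (total - reserved) * allocation_ratio, so this node advertises 32 VCPU, 7167 MB of RAM, and about 70 GB of disk. A quick check, with the inventory values copied verbatim from the log:

```python
# Placement capacity = (total - reserved) * allocation_ratio
# (inventory values copied from the log record above).
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```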
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.085 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.086 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.159 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.159 189489 DEBUG nova.network.neutron [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.199 189489 INFO nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.436 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.526 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.530 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.531 189489 INFO nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Creating image(s)#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.533 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "/var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.534 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "/var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.535 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "/var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.566 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.652 189489 DEBUG nova.policy [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '08fa71399ec746088caaa6ce113cf5bc', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'aac53958ac1141be8c52323cdbc3e956', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.684 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.118s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.685 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.686 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.710 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.768 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.770 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.817 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1,backing_fmt=raw /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.819 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "c7e712fd6afdf0909a364074b7f15b004ad35ab1" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.820 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.882 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.884 189489 DEBUG nova.virt.disk.api [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Checking if we can resize image /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.885 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.950 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.952 189489 DEBUG nova.virt.disk.api [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Cannot resize image /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
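The processutils sequence above is nova's Qcow2 image backend creating a copy-on-write overlay: qemu-img info calls (wrapped in oslo's prlimit helper, capped at a 1 GiB address space and 30 s of CPU) bracket a qemu-img create that layers the instance disk on the cached base image, and the final "Cannot resize image ... to a smaller size" DEBUG just means the requested 1073741824 bytes is not larger than the overlay's current virtual size, so the extend step is skipped. The same pattern from Python, with the command arguments copied from the log and the prlimit wrapper omitted:

```python
# Sketch: the qemu-img overlay creation from the log (paths verbatim;
# needs write access under /var/lib/nova/instances to actually run).
import json, subprocess

base = '/var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1'
disk = '/var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk'

subprocess.run(['qemu-img', 'create', '-f', 'qcow2',
                '-o', f'backing_file={base},backing_fmt=raw',
                disk, '1073741824'], check=True)

info = json.loads(subprocess.run(
    ['qemu-img', 'info', disk, '--force-share', '--output=json'],
    check=True, capture_output=True).stdout)
print(info['virtual-size'], info['backing-filename'])
```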
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.953 189489 DEBUG nova.objects.instance [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lazy-loading 'migration_context' on Instance uuid e88c51da-0fd1-40c7-9084-fb672a0ac109 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.970 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.972 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Ensure instance console log exists: /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.974 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.975 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:04 compute-0 nova_compute[189485]: 2025-11-29 15:55:04.976 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:05 compute-0 podman[254516]: 2025-11-29 15:55:05.70223419 +0000 UTC m=+0.137525709 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Nov 29 15:55:05 compute-0 nova_compute[189485]: 2025-11-29 15:55:05.775 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:05 compute-0 nova_compute[189485]: 2025-11-29 15:55:05.792 189489 DEBUG nova.network.neutron [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Successfully created port: f2519551-d78d-4d96-b57a-13c24687d7d6 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.104 189489 DEBUG nova.network.neutron [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Successfully updated port: f2519551-d78d-4d96-b57a-13c24687d7d6 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
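Nova creates and updates the port through Neutron's REST API. A hedged equivalent using openstacksdk rather than nova's internal client; the cloud name is an assumption, while the network and instance UUIDs are taken from the log above:

    import openstack

    conn = openstack.connect(cloud='default')  # assumed clouds.yaml entry
    port = conn.network.create_port(
        network_id='9b5208cc-e5fa-4a99-99d7-6c6537b56a0b',
        device_id='e88c51da-0fd1-40c7-9084-fb672a0ac109',
        device_owner='compute:nova',
    )
    print(port.id)  # f2519551-d78d-4d96-b57a-13c24687d7d6 in this run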
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.158 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.159 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquired lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.160 189489 DEBUG nova.network.neutron [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.236 189489 DEBUG nova.compute.manager [req-fa522f17-e952-4e4d-acae-5f4c781cb77f req-5d9fd6d1-adc9-495c-87d6-7bf9499e6383 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received event network-changed-f2519551-d78d-4d96-b57a-13c24687d7d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.237 189489 DEBUG nova.compute.manager [req-fa522f17-e952-4e4d-acae-5f4c781cb77f req-5d9fd6d1-adc9-495c-87d6-7bf9499e6383 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Refreshing instance network info cache due to event network-changed-f2519551-d78d-4d96-b57a-13c24687d7d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.238 189489 DEBUG oslo_concurrency.lockutils [req-fa522f17-e952-4e4d-acae-5f4c781cb77f req-5d9fd6d1-adc9-495c-87d6-7bf9499e6383 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.277 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:07 compute-0 nova_compute[189485]: 2025-11-29 15:55:07.311 189489 DEBUG nova.network.neutron [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Nov 29 15:55:07 compute-0 podman[254537]: 2025-11-29 15:55:07.689993388 +0000 UTC m=+0.104967404 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125)
Nov 29 15:55:07 compute-0 podman[254548]: 2025-11-29 15:55:07.711366202 +0000 UTC m=+0.109768543 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 29 15:55:07 compute-0 podman[254535]: 2025-11-29 15:55:07.726827278 +0000 UTC m=+0.160127508 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2024-09-18T21:23:30, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, managed_by=edpm_ansible, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., version=9.4, architecture=x86_64)
Nov 29 15:55:07 compute-0 podman[254536]: 2025-11-29 15:55:07.730428634 +0000 UTC m=+0.150859247 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 15:55:07 compute-0 podman[254543]: 2025-11-29 15:55:07.751987705 +0000 UTC m=+0.162637015 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible)
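Each podman health_status event above is the result of a scheduled run of the container's configured healthcheck (the 'test' entry in config_data). One way to trigger and read the same check by hand, assuming only the podman CLI on PATH (exit status 0 means healthy):

    import subprocess

    def health_status(container: str) -> str:
        # `podman healthcheck run` executes the container's configured
        # healthcheck and exits 0 on success.
        rc = subprocess.run(['podman', 'healthcheck', 'run', container],
                            capture_output=True).returncode
        return 'healthy' if rc == 0 else 'unhealthy'

    for name in ('ceilometer_agent_compute', 'ovn_controller'):
        print(name, health_status(name))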
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.086 189489 DEBUG nova.network.neutron [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Updating instance_info_cache with network_info: [{"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.129 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Releasing lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.130 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Instance network_info: |[{"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.131 189489 DEBUG oslo_concurrency.lockutils [req-fa522f17-e952-4e4d-acae-5f4c781cb77f req-5d9fd6d1-adc9-495c-87d6-7bf9499e6383 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.131 189489 DEBUG nova.network.neutron [req-fa522f17-e952-4e4d-acae-5f4c781cb77f req-5d9fd6d1-adc9-495c-87d6-7bf9499e6383 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Refreshing network info cache for port f2519551-d78d-4d96-b57a-13c24687d7d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.134 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Start _get_guest_xml network_info=[{"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'device_name': '/dev/vda', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_options': None, 'encryption_format': None, 'size': 0, 'guest_format': None, 'encrypted': False, 'image_id': '6a931c3a-089f-4276-ac71-a0da3ffce7c7'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.142 189489 WARNING nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.148 189489 DEBUG nova.virt.libvirt.host [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.149 189489 DEBUG nova.virt.libvirt.host [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.154 189489 DEBUG nova.virt.libvirt.host [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.155 189489 DEBUG nova.virt.libvirt.host [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
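The probe above first looks for a cgroups v1 CPU controller, finds none, then finds one under cgroups v2. On a v2 (unified hierarchy) host the kernel advertises the active controllers in a single file, so the check reduces to roughly this sketch; the helper name mirrors the log, the path is the standard unified-hierarchy mount point:

    from pathlib import Path

    def has_cgroupsv2_cpu_controller() -> bool:
        controllers = Path('/sys/fs/cgroup/cgroup.controllers')
        return controllers.exists() and 'cpu' in controllers.read_text().split()

    print(has_cgroupsv2_cpu_controller())  # True on this host, per the log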
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.155 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.156 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-11-29T15:49:08Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='cde1daa0-956a-446c-a1eb-2046e0cd1fa7',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2025-11-29T15:49:10Z,direct_url=<?>,disk_format='qcow2',id=6a931c3a-089f-4276-ac71-a0da3ffce7c7,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='04d676205d9142d19f3d4ce7389f72a2',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2025-11-29T15:49:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.157 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.157 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.158 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.158 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.158 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.159 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.159 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.160 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.160 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.161 189489 DEBUG nova.virt.hardware [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
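The topology lines above enumerate every (sockets, cores, threads) factorization of the flavor's vCPU count that fits within the 65536-per-dimension limits; with one vCPU the only candidate is 1:1:1. A simplified sketch of that search (nova's real implementation also weighs flavor and image preferences when sorting candidates):

    import itertools

    def possible_topologies(vcpus, limit=65536):
        # Yield every (sockets, cores, threads) split whose product
        # equals the vCPU count and respects the per-dimension limit.
        for s, c, t in itertools.product(range(1, min(vcpus, limit) + 1),
                                         repeat=3):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as chosen in the log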
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.164 189489 DEBUG nova.virt.libvirt.vif [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:55:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1759961201',display_name='tempest-TestNetworkBasicOps-server-1759961201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1759961201',id=15,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZy2hvTYFO/rYWDP0SPQtmW14+hvIgoA8FFJMbb720PdMfA9owmAb/O98hPijQ8mmc3EFgtLFDl3IaUuyfi9u9aOm0NyLvIfNjgQtC1NwsBVMqXTkP8qYk1Tg6wQU2zSg==',key_name='tempest-TestNetworkBasicOps-882265342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aac53958ac1141be8c52323cdbc3e956',ramdisk_id='',reservation_id='r-2ryrudo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-729114730',owner_user_name='tempest-TestNetworkBasicOps-729114730-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:55:04Z,user_data=None,user_id='08fa71399ec746088caaa6ce113cf5bc',uuid=e88c51da-0fd1-40c7-9084-fb672a0ac109,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.165 189489 DEBUG nova.network.os_vif_util [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converting VIF {"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.166 189489 DEBUG nova.network.os_vif_util [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:96:51,bridge_name='br-int',has_traffic_filtering=True,id=f2519551-d78d-4d96-b57a-13c24687d7d6,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2519551-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.167 189489 DEBUG nova.objects.instance [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lazy-loading 'pci_devices' on Instance uuid e88c51da-0fd1-40c7-9084-fb672a0ac109 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.196 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] End _get_guest_xml xml=<domain type="kvm">
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <uuid>e88c51da-0fd1-40c7-9084-fb672a0ac109</uuid>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <name>instance-0000000f</name>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <memory>131072</memory>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <vcpu>1</vcpu>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <metadata>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <nova:name>tempest-TestNetworkBasicOps-server-1759961201</nova:name>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <nova:creationTime>2025-11-29 15:55:09</nova:creationTime>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <nova:flavor name="m1.nano">
Nov 29 15:55:09 compute-0 nova_compute[189485]:        <nova:memory>128</nova:memory>
Nov 29 15:55:09 compute-0 nova_compute[189485]:        <nova:disk>1</nova:disk>
Nov 29 15:55:09 compute-0 nova_compute[189485]:        <nova:swap>0</nova:swap>
Nov 29 15:55:09 compute-0 nova_compute[189485]:        <nova:ephemeral>0</nova:ephemeral>
Nov 29 15:55:09 compute-0 nova_compute[189485]:        <nova:vcpus>1</nova:vcpus>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      </nova:flavor>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <nova:owner>
Nov 29 15:55:09 compute-0 nova_compute[189485]:        <nova:user uuid="08fa71399ec746088caaa6ce113cf5bc">tempest-TestNetworkBasicOps-729114730-project-member</nova:user>
Nov 29 15:55:09 compute-0 nova_compute[189485]:        <nova:project uuid="aac53958ac1141be8c52323cdbc3e956">tempest-TestNetworkBasicOps-729114730</nova:project>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      </nova:owner>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <nova:root type="image" uuid="6a931c3a-089f-4276-ac71-a0da3ffce7c7"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <nova:ports>
Nov 29 15:55:09 compute-0 nova_compute[189485]:        <nova:port uuid="f2519551-d78d-4d96-b57a-13c24687d7d6">
Nov 29 15:55:09 compute-0 nova_compute[189485]:          <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:        </nova:port>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      </nova:ports>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </nova:instance>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  </metadata>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <sysinfo type="smbios">
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <system>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <entry name="manufacturer">RDO</entry>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <entry name="product">OpenStack Compute</entry>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <entry name="serial">e88c51da-0fd1-40c7-9084-fb672a0ac109</entry>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <entry name="uuid">e88c51da-0fd1-40c7-9084-fb672a0ac109</entry>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <entry name="family">Virtual Machine</entry>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </system>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  </sysinfo>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <os>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <type arch="x86_64" machine="q35">hvm</type>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <boot dev="hd"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <smbios mode="sysinfo"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  </os>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <features>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <acpi/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <apic/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <vmcoreinfo/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  </features>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <clock offset="utc">
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <timer name="pit" tickpolicy="delay"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <timer name="rtc" tickpolicy="catchup"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <timer name="hpet" present="no"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  </clock>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <cpu mode="host-model" match="exact">
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <topology sockets="1" cores="1" threads="1"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  </cpu>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  <devices>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <disk type="file" device="disk">
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <driver name="qemu" type="qcow2" cache="none"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <target dev="vda" bus="virtio"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <disk type="file" device="cdrom">
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <driver name="qemu" type="raw" cache="none"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <source file="/var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk.config"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <target dev="sda" bus="sata"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </disk>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <interface type="ethernet">
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <mac address="fa:16:3e:4f:96:51"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <driver name="vhost" rx_queue_size="512"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <mtu size="1442"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <target dev="tapf2519551-d7"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </interface>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <serial type="pty">
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <log file="/var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/console.log" append="off"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </serial>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <video>
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <model type="virtio"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </video>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <input type="tablet" bus="usb"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <rng model="virtio">
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <backend model="random">/dev/urandom</backend>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </rng>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="pci" model="pcie-root-port"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <controller type="usb" index="0"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    <memballoon model="virtio">
Nov 29 15:55:09 compute-0 nova_compute[189485]:      <stats period="10"/>
Nov 29 15:55:09 compute-0 nova_compute[189485]:    </memballoon>
Nov 29 15:55:09 compute-0 nova_compute[189485]:  </devices>
Nov 29 15:55:09 compute-0 nova_compute[189485]: </domain>
Nov 29 15:55:09 compute-0 nova_compute[189485]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.210 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Preparing to wait for external event network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.210 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.210 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.211 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
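prepare_for_instance_event registers a waiter *before* the VIF is plugged, so the later network-vif-plugged notification from neutron cannot be missed. The same handshake, reduced to a threading sketch (nova uses eventlet internally; every name here is illustrative):

    import threading

    _events = {}

    def prepare_for_instance_event(instance_uuid, event_name):
        # Register the waiter first so a fast notification is not lost.
        return _events.setdefault((instance_uuid, event_name), threading.Event())

    def external_instance_event(instance_uuid, event_name):
        # Called when the notification arrives; wakes the waiter.
        _events.setdefault((instance_uuid, event_name), threading.Event()).set()

    waiter = prepare_for_instance_event(
        'e88c51da-0fd1-40c7-9084-fb672a0ac109',
        'network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6')
    # ... plug the VIF, then block until neutron reports the port is up:
    # waiter.wait(timeout=300)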
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.211 189489 DEBUG nova.virt.libvirt.vif [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-11-29T15:55:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1759961201',display_name='tempest-TestNetworkBasicOps-server-1759961201',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1759961201',id=15,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZy2hvTYFO/rYWDP0SPQtmW14+hvIgoA8FFJMbb720PdMfA9owmAb/O98hPijQ8mmc3EFgtLFDl3IaUuyfi9u9aOm0NyLvIfNjgQtC1NwsBVMqXTkP8qYk1Tg6wQU2zSg==',key_name='tempest-TestNetworkBasicOps-882265342',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='aac53958ac1141be8c52323cdbc3e956',ramdisk_id='',reservation_id='r-2ryrudo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestNetworkBasicOps-729114730',owner_user_name='tempest-TestNetworkBasicOps-729114730-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-11-29T15:55:04Z,user_data=None,user_id='08fa71399ec746088caaa6ce113cf5bc',uuid=e88c51da-0fd1-40c7-9084-fb672a0ac109,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.212 189489 DEBUG nova.network.os_vif_util [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converting VIF {"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.212 189489 DEBUG nova.network.os_vif_util [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4f:96:51,bridge_name='br-int',has_traffic_filtering=True,id=f2519551-d78d-4d96-b57a-13c24687d7d6,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2519551-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.212 189489 DEBUG os_vif [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:96:51,bridge_name='br-int',has_traffic_filtering=True,id=f2519551-d78d-4d96-b57a-13c24687d7d6,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2519551-d7') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.213 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.213 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.214 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.218 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.219 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf2519551-d7, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.220 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf2519551-d7, col_values=(('external_ids', {'iface-id': 'f2519551-d78d-4d96-b57a-13c24687d7d6', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4f:96:51', 'vm-uuid': 'e88c51da-0fd1-40c7-9084-fb672a0ac109'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
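The two transactions above are plain ovsdbapp commands: an idempotent add-br, then an add-port plus a db_set that stamps the Interface row with the neutron port ID and instance UUID so ovn-controller can bind the logical port. A standalone sketch of the same commands, assuming ovsdbapp is installed and the usual ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.add_br('br-int', may_exist=True, datapath_type='system'))
        txn.add(api.add_port('br-int', 'tapf2519551-d7', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapf2519551-d7',
            ('external_ids', {'iface-id': 'f2519551-d78d-4d96-b57a-13c24687d7d6',
                              'iface-status': 'active',
                              'attached-mac': 'fa:16:3e:4f:96:51',
                              'vm-uuid': 'e88c51da-0fd1-40c7-9084-fb672a0ac109'})))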
Nov 29 15:55:09 compute-0 NetworkManager[56360]: <info>  [1764431709.2230] manager: (tapf2519551-d7): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/74)
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.222 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.230 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.232 189489 INFO os_vif [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4f:96:51,bridge_name='br-int',has_traffic_filtering=True,id=f2519551-d78d-4d96-b57a-13c24687d7d6,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2519551-d7')
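The lines above show the full os-vif round trip: nova_to_osvif_vif converts the port dict into a VIFOpenVSwitch object, and os_vif.plug() hands it to the 'ovs' plugin, which issues the OVSDB transaction just seen. A minimal sketch of driving the same call directly, with field values copied from this log (only a subset of VIF fields is set; the rest keep library defaults):

    # Sketch: plug a VIFOpenVSwitch through os-vif, mirroring the log above.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()  # loads the registered plugins, including 'ovs'
    my_vif = vif.VIFOpenVSwitch(
        id='f2519551-d78d-4d96-b57a-13c24687d7d6',
        address='fa:16:3e:4f:96:51',
        bridge_name='br-int',
        vif_name='tapf2519551-d7')
    instance = instance_info.InstanceInfo(
        uuid='e88c51da-0fd1-40c7-9084-fb672a0ac109',
        name='instance-0000000f')
    os_vif.plug(my_vif, instance)  # emits the "Successfully plugged vif" INFO line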
Nov 29 15:55:09 compute-0 ovn_controller[97827]: 2025-11-29T15:55:09Z|00159|binding|INFO|Releasing lport 4b21e6be-af46-463f-9bba-3aa8bb5c67fb from this chassis (sb_readonly=0)
Nov 29 15:55:09 compute-0 ovn_controller[97827]: 2025-11-29T15:55:09Z|00160|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.367 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.374 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.374 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.374 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] No VIF found with MAC fa:16:3e:4f:96:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.375 189489 INFO nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Using config drive
Nov 29 15:55:09 compute-0 ovn_controller[97827]: 2025-11-29T15:55:09Z|00161|binding|INFO|Releasing lport 4b21e6be-af46-463f-9bba-3aa8bb5c67fb from this chassis (sb_readonly=0)
Nov 29 15:55:09 compute-0 ovn_controller[97827]: 2025-11-29T15:55:09Z|00162|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:55:09 compute-0 nova_compute[189485]: 2025-11-29 15:55:09.641 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:10 compute-0 nova_compute[189485]: 2025-11-29 15:55:10.134 189489 INFO nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Creating config drive at /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk.config
Nov 29 15:55:10 compute-0 nova_compute[189485]: 2025-11-29 15:55:10.145 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm2x4_83a execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:55:10 compute-0 nova_compute[189485]: 2025-11-29 15:55:10.292 189489 DEBUG oslo_concurrency.processutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpm2x4_83a" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
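The config drive is produced by shelling out to mkisofs through oslo.concurrency, which is exactly what the Running cmd / returned: 0 pair above records. A minimal sketch of the same invocation, assuming the temporary staging directory (/tmp/tmpm2x4_83a in this run) already holds the generated metadata files:

    # Sketch: build the config-drive ISO the way nova does above.
    from oslo_concurrency import processutils

    out, err = processutils.execute(
        '/usr/bin/mkisofs',
        '-o', '/var/lib/nova/instances/'
              'e88c51da-0fd1-40c7-9084-fb672a0ac109/disk.config',
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2',
        '/tmp/tmpm2x4_83a')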
Nov 29 15:55:10 compute-0 kernel: tapf2519551-d7: entered promiscuous mode
Nov 29 15:55:10 compute-0 NetworkManager[56360]: <info>  [1764431710.3950] manager: (tapf2519551-d7): new Tun device (/org/freedesktop/NetworkManager/Devices/75)
Nov 29 15:55:10 compute-0 nova_compute[189485]: 2025-11-29 15:55:10.395 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:10 compute-0 ovn_controller[97827]: 2025-11-29T15:55:10Z|00163|binding|INFO|Claiming lport f2519551-d78d-4d96-b57a-13c24687d7d6 for this chassis.
Nov 29 15:55:10 compute-0 ovn_controller[97827]: 2025-11-29T15:55:10Z|00164|binding|INFO|f2519551-d78d-4d96-b57a-13c24687d7d6: Claiming fa:16:3e:4f:96:51 10.100.0.4
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.409 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:96:51 10.100.0.4'], port_security=['fa:16:3e:4f:96:51 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e88c51da-0fd1-40c7-9084-fb672a0ac109', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aac53958ac1141be8c52323cdbc3e956', 'neutron:revision_number': '2', 'neutron:security_group_ids': '3ddd312c-8d2b-43f5-b273-508a1341c04d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32ea6e1f-12a5-46ef-82e5-118dabc8eb05, chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=f2519551-d78d-4d96-b57a-13c24687d7d6) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.413 106713 INFO neutron.agent.ovn.metadata.agent [-] Port f2519551-d78d-4d96-b57a-13c24687d7d6 in datapath 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b bound to our chassis
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.417 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b
Nov 29 15:55:10 compute-0 nova_compute[189485]: 2025-11-29 15:55:10.431 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:10 compute-0 ovn_controller[97827]: 2025-11-29T15:55:10Z|00165|binding|INFO|Setting lport f2519551-d78d-4d96-b57a-13c24687d7d6 ovn-installed in OVS
Nov 29 15:55:10 compute-0 ovn_controller[97827]: 2025-11-29T15:55:10Z|00166|binding|INFO|Setting lport f2519551-d78d-4d96-b57a-13c24687d7d6 up in Southbound
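At this point ovn-controller has claimed the lport, marked it ovn-installed in OVS, and set it up in the Southbound DB (messages 00163-00166 above). A quick way to verify the binding from the chassis, assuming ovn-sbctl on this node can reach the Southbound DB:

    # Sketch: confirm the Port_Binding that ovn-controller just claimed.
    import subprocess

    subprocess.run(
        ['ovn-sbctl', 'find', 'Port_Binding',
         'logical_port=f2519551-d78d-4d96-b57a-13c24687d7d6'],
        check=True)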
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.447 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[846e0c05-0a3f-4da8-95fe-425d82deaeed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:55:10 compute-0 nova_compute[189485]: 2025-11-29 15:55:10.447 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:10 compute-0 systemd-machined[155802]: New machine qemu-16-instance-0000000f.
Nov 29 15:55:10 compute-0 systemd-udevd[254657]: Network interface NamePolicy= disabled on kernel command line.
Nov 29 15:55:10 compute-0 systemd[1]: Started Virtual Machine qemu-16-instance-0000000f.
Nov 29 15:55:10 compute-0 NetworkManager[56360]: <info>  [1764431710.4838] device (tapf2519551-d7): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Nov 29 15:55:10 compute-0 NetworkManager[56360]: <info>  [1764431710.4846] device (tapf2519551-d7): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.503 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[7ee80750-d701-450e-8a0d-2a2b12cc76b6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.506 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[79779dc7-9756-4c29-80ae-579516cb0184]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.548 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[77d62bbc-480d-407d-9221-3738ca78cd29]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.576 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[425b1962-1777-4dcb-a946-46f3ba178aaa]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b5208cc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:79:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540694, 'reachable_time': 41561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254669, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.604 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[88a5d746-d12a-4348-8296-833327a86229]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9b5208cc-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540706, 'tstamp': 540706}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254671, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9b5208cc-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540708, 'tstamp': 540708}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254671, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
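The two privsep replies above are netlink dumps (RTM_NEWLINK, then RTM_NEWADDR) taken inside the ovnmeta- namespace: the agent confirms that tap9b5208cc-e1 is up and carries both 10.100.0.2/28 and the metadata address 169.254.169.254/32. A sketch of the same check with pyroute2, the library behind those dict dumps (the namespace and interface names are taken from the log; the label filter is an assumption about the API):

    # Sketch: list the addresses the metadata agent just verified above.
    from pyroute2 import NetNS

    with NetNS('ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b') as ns:
        for addr in ns.get_addr(label='tap9b5208cc-e1'):
            print(dict(addr['attrs'])['IFA_ADDRESS'])  # 10.100.0.2, 169.254.169.254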
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.606 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b5208cc-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:55:10 compute-0 nova_compute[189485]: 2025-11-29 15:55:10.609 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.611 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b5208cc-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.612 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.612 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b5208cc-e0, col_values=(('external_ids', {'iface-id': '4b21e6be-af46-463f-9bba-3aa8bb5c67fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:55:10 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:10.613 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Nov 29 15:55:10 compute-0 nova_compute[189485]: 2025-11-29 15:55:10.779 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.037 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431711.0368576, e88c51da-0fd1-40c7-9084-fb672a0ac109 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.039 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] VM Started (Lifecycle Event)
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.072 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.088 189489 DEBUG nova.compute.manager [req-35456600-8592-441e-89d1-09285dc7a884 req-d40d406a-118e-411e-aed2-927d56df4d6b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received event network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.090 189489 DEBUG oslo_concurrency.lockutils [req-35456600-8592-441e-89d1-09285dc7a884 req-d40d406a-118e-411e-aed2-927d56df4d6b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.091 189489 DEBUG oslo_concurrency.lockutils [req-35456600-8592-441e-89d1-09285dc7a884 req-d40d406a-118e-411e-aed2-927d56df4d6b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.092 189489 DEBUG oslo_concurrency.lockutils [req-35456600-8592-441e-89d1-09285dc7a884 req-d40d406a-118e-411e-aed2-927d56df4d6b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
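The Acquiring/acquired/released triad above is oslo.concurrency's standard lock tracing: one DEBUG line when the lock is requested, one when it is granted (with the wait time), and one on release (with the hold time). The code behind it is simply the lockutils context manager; a minimal sketch with the lock name from this log:

    # Sketch: the pattern that produces the three lock DEBUG lines above.
    from oslo_concurrency import lockutils

    with lockutils.lock('e88c51da-0fd1-40c7-9084-fb672a0ac109-events'):
        # pop_instance_event() runs here; entry and exit are what the
        # "acquired ... waited" and "released ... held" lines record.
        pass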
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.093 189489 DEBUG nova.compute.manager [req-35456600-8592-441e-89d1-09285dc7a884 req-d40d406a-118e-411e-aed2-927d56df4d6b 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Processing event network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.094 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.096 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431711.0369968, e88c51da-0fd1-40c7-9084-fb672a0ac109 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.097 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] VM Paused (Lifecycle Event)
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.101 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.108 189489 INFO nova.virt.libvirt.driver [-] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Instance spawned successfully.
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.108 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.119 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.127 189489 DEBUG nova.virt.driver [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] Emitting event <LifecycleEvent: 1764431711.1020327, e88c51da-0fd1-40c7-9084-fb672a0ac109 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.127 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] VM Resumed (Lifecycle Event)
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.149 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.151 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.152 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.153 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.154 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.155 189489 DEBUG nova.virt.libvirt.driver [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.161 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.167 189489 DEBUG nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.197 189489 INFO nova.compute.manager [None req-91ee696e-c8e5-48a0-a7e4-5b3bf19c7bc5 - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] During sync_power_state the instance has a pending task (spawning). Skip.
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.214 189489 DEBUG nova.network.neutron [req-fa522f17-e952-4e4d-acae-5f4c781cb77f req-5d9fd6d1-adc9-495c-87d6-7bf9499e6383 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Updated VIF entry in instance network info cache for port f2519551-d78d-4d96-b57a-13c24687d7d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.214 189489 DEBUG nova.network.neutron [req-fa522f17-e952-4e4d-acae-5f4c781cb77f req-5d9fd6d1-adc9-495c-87d6-7bf9499e6383 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Updating instance_info_cache with network_info: [{"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.235 189489 DEBUG oslo_concurrency.lockutils [req-fa522f17-e952-4e4d-acae-5f4c781cb77f req-5d9fd6d1-adc9-495c-87d6-7bf9499e6383 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.241 189489 INFO nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Took 6.71 seconds to spawn the instance on the hypervisor.
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.242 189489 DEBUG nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.316 189489 INFO nova.compute.manager [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Took 7.56 seconds to build instance.
Nov 29 15:55:11 compute-0 nova_compute[189485]: 2025-11-29 15:55:11.335 189489 DEBUG oslo_concurrency.lockutils [None req-84b33d20-d9ef-4dab-80ce-1bf86f02261c 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.662s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:55:11 compute-0 podman[254679]: 2025-11-29 15:55:11.65616556 +0000 UTC m=+0.094379628 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 15:55:13 compute-0 nova_compute[189485]: 2025-11-29 15:55:13.242 189489 DEBUG nova.compute.manager [req-91df76d2-fa2e-47d7-843b-a7bfa76d02c4 req-f8e1e444-dde0-404a-a192-15a85ebde934 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received event network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:55:13 compute-0 nova_compute[189485]: 2025-11-29 15:55:13.243 189489 DEBUG oslo_concurrency.lockutils [req-91df76d2-fa2e-47d7-843b-a7bfa76d02c4 req-f8e1e444-dde0-404a-a192-15a85ebde934 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:55:13 compute-0 nova_compute[189485]: 2025-11-29 15:55:13.244 189489 DEBUG oslo_concurrency.lockutils [req-91df76d2-fa2e-47d7-843b-a7bfa76d02c4 req-f8e1e444-dde0-404a-a192-15a85ebde934 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:55:13 compute-0 nova_compute[189485]: 2025-11-29 15:55:13.244 189489 DEBUG oslo_concurrency.lockutils [req-91df76d2-fa2e-47d7-843b-a7bfa76d02c4 req-f8e1e444-dde0-404a-a192-15a85ebde934 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:55:13 compute-0 nova_compute[189485]: 2025-11-29 15:55:13.245 189489 DEBUG nova.compute.manager [req-91df76d2-fa2e-47d7-843b-a7bfa76d02c4 req-f8e1e444-dde0-404a-a192-15a85ebde934 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] No waiting events found dispatching network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:55:13 compute-0 nova_compute[189485]: 2025-11-29 15:55:13.245 189489 WARNING nova.compute.manager [req-91df76d2-fa2e-47d7-843b-a7bfa76d02c4 req-f8e1e444-dde0-404a-a192-15a85ebde934 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received unexpected event network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 for instance with vm_state active and task_state None.
Nov 29 15:55:13 compute-0 podman[254699]: 2025-11-29 15:55:13.676791461 +0000 UTC m=+0.119052932 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:55:14 compute-0 nova_compute[189485]: 2025-11-29 15:55:14.223 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:14 compute-0 nova_compute[189485]: 2025-11-29 15:55:14.933 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:14 compute-0 NetworkManager[56360]: <info>  [1764431714.9341] manager: (patch-br-int-to-provnet-902f0f77-8c45-4eff-be74-67c45c992175): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/76)
Nov 29 15:55:14 compute-0 NetworkManager[56360]: <info>  [1764431714.9377] manager: (patch-provnet-902f0f77-8c45-4eff-be74-67c45c992175-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/77)
Nov 29 15:55:15 compute-0 nova_compute[189485]: 2025-11-29 15:55:15.071 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:15 compute-0 ovn_controller[97827]: 2025-11-29T15:55:15Z|00167|binding|INFO|Releasing lport 4b21e6be-af46-463f-9bba-3aa8bb5c67fb from this chassis (sb_readonly=0)
Nov 29 15:55:15 compute-0 ovn_controller[97827]: 2025-11-29T15:55:15Z|00168|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:55:15 compute-0 nova_compute[189485]: 2025-11-29 15:55:15.088 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:15 compute-0 nova_compute[189485]: 2025-11-29 15:55:15.492 189489 DEBUG nova.compute.manager [req-6340a81c-a7ad-405e-9df9-9db4b7d91083 req-323b0306-6af9-4203-b9ee-a0bff4d880ac 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received event network-changed-f2519551-d78d-4d96-b57a-13c24687d7d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:55:15 compute-0 nova_compute[189485]: 2025-11-29 15:55:15.493 189489 DEBUG nova.compute.manager [req-6340a81c-a7ad-405e-9df9-9db4b7d91083 req-323b0306-6af9-4203-b9ee-a0bff4d880ac 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Refreshing instance network info cache due to event network-changed-f2519551-d78d-4d96-b57a-13c24687d7d6. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Nov 29 15:55:15 compute-0 nova_compute[189485]: 2025-11-29 15:55:15.493 189489 DEBUG oslo_concurrency.lockutils [req-6340a81c-a7ad-405e-9df9-9db4b7d91083 req-323b0306-6af9-4203-b9ee-a0bff4d880ac 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:55:15 compute-0 nova_compute[189485]: 2025-11-29 15:55:15.494 189489 DEBUG oslo_concurrency.lockutils [req-6340a81c-a7ad-405e-9df9-9db4b7d91083 req-323b0306-6af9-4203-b9ee-a0bff4d880ac 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquired lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:55:15 compute-0 nova_compute[189485]: 2025-11-29 15:55:15.494 189489 DEBUG nova.network.neutron [req-6340a81c-a7ad-405e-9df9-9db4b7d91083 req-323b0306-6af9-4203-b9ee-a0bff4d880ac 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Refreshing network info cache for port f2519551-d78d-4d96-b57a-13c24687d7d6 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Nov 29 15:55:15 compute-0 nova_compute[189485]: 2025-11-29 15:55:15.781 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:17 compute-0 nova_compute[189485]: 2025-11-29 15:55:17.880 189489 DEBUG nova.network.neutron [req-6340a81c-a7ad-405e-9df9-9db4b7d91083 req-323b0306-6af9-4203-b9ee-a0bff4d880ac 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Updated VIF entry in instance network info cache for port f2519551-d78d-4d96-b57a-13c24687d7d6. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Nov 29 15:55:17 compute-0 nova_compute[189485]: 2025-11-29 15:55:17.881 189489 DEBUG nova.network.neutron [req-6340a81c-a7ad-405e-9df9-9db4b7d91083 req-323b0306-6af9-4203-b9ee-a0bff4d880ac 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Updating instance_info_cache with network_info: [{"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:55:17 compute-0 nova_compute[189485]: 2025-11-29 15:55:17.911 189489 DEBUG oslo_concurrency.lockutils [req-6340a81c-a7ad-405e-9df9-9db4b7d91083 req-323b0306-6af9-4203-b9ee-a0bff4d880ac 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Releasing lock "refresh_cache-e88c51da-0fd1-40c7-9084-fb672a0ac109" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:55:19 compute-0 nova_compute[189485]: 2025-11-29 15:55:19.226 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:19 compute-0 nova_compute[189485]: 2025-11-29 15:55:19.487 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:55:20 compute-0 nova_compute[189485]: 2025-11-29 15:55:20.783 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:21 compute-0 nova_compute[189485]: 2025-11-29 15:55:21.712 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:22 compute-0 ovn_controller[97827]: 2025-11-29T15:55:22Z|00023|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:0e:87:f3 10.100.0.182
Nov 29 15:55:22 compute-0 ovn_controller[97827]: 2025-11-29T15:55:22Z|00024|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:0e:87:f3 10.100.0.182
Nov 29 15:55:23 compute-0 nova_compute[189485]: 2025-11-29 15:55:23.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:55:24 compute-0 nova_compute[189485]: 2025-11-29 15:55:24.228 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:24 compute-0 nova_compute[189485]: 2025-11-29 15:55:24.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:55:24 compute-0 nova_compute[189485]: 2025-11-29 15:55:24.498 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:55:24 compute-0 nova_compute[189485]: 2025-11-29 15:55:24.499 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
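_heal_instance_info_cache is one of the ComputeManager periodic tasks that oslo.service keeps announcing in this log; each tick it picks one instance and force-refreshes its network info cache, which is what the lines just below record. A sketch of how such a task is declared, with an assumed spacing value (nova configures each task's interval separately):

    # Sketch: declaring a periodic task with oslo.service.
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            """Pick one instance and force-refresh its network info cache."""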
Nov 29 15:55:25 compute-0 nova_compute[189485]: 2025-11-29 15:55:25.786 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:25 compute-0 nova_compute[189485]: 2025-11-29 15:55:25.821 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:55:25 compute-0 nova_compute[189485]: 2025-11-29 15:55:25.822 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:55:25 compute-0 nova_compute[189485]: 2025-11-29 15:55:25.823 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:55:25 compute-0 nova_compute[189485]: 2025-11-29 15:55:25.825 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:55:26 compute-0 nova_compute[189485]: 2025-11-29 15:55:26.553 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:26 compute-0 podman[254729]: 2025-11-29 15:55:26.668341176 +0000 UTC m=+0.107135921 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.231 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:29 compute-0 podman[203677]: time="2025-11-29T15:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:55:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30757 "" "Go-http-client/1.1"
Nov 29 15:55:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5265 "" "Go-http-client/1.1"
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.923 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.953 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.953 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.953 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.954 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.954 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.954 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.983 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.984 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.984 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:55:29 compute-0 nova_compute[189485]: 2025-11-29 15:55:29.984 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.130 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.201 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.203 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.278 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.291 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.355 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.357 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.431 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.444 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.510 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.512 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.578 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.591 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.670 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.673 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.738 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 15:55:30 compute-0 nova_compute[189485]: 2025-11-29 15:55:30.789 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.131 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.132 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4698MB free_disk=72.2196044921875GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.133 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.133 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.231 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.232 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance f8649788-26c9-4497-a517-f989c3c9cdb7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.232 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.233 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance e88c51da-0fd1-40c7-9084-fb672a0ac109 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.233 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.233 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.367 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.386 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:55:31 compute-0 openstack_network_exporter[205841]: ERROR   15:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:55:31 compute-0 openstack_network_exporter[205841]: ERROR   15:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:55:31 compute-0 openstack_network_exporter[205841]: ERROR   15:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:55:31 compute-0 openstack_network_exporter[205841]: ERROR   15:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:55:31 compute-0 openstack_network_exporter[205841]: ERROR   15:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.428 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.437 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.968 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.969 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:55:31 compute-0 nova_compute[189485]: 2025-11-29 15:55:31.970 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 15:55:34 compute-0 nova_compute[189485]: 2025-11-29 15:55:34.233 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:34 compute-0 ovn_controller[97827]: 2025-11-29T15:55:34Z|00169|binding|INFO|Releasing lport 4b21e6be-af46-463f-9bba-3aa8bb5c67fb from this chassis (sb_readonly=0)
Nov 29 15:55:34 compute-0 ovn_controller[97827]: 2025-11-29T15:55:34Z|00170|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:55:35 compute-0 nova_compute[189485]: 2025-11-29 15:55:35.045 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:35 compute-0 nova_compute[189485]: 2025-11-29 15:55:35.794 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:36 compute-0 podman[254778]: 2025-11-29 15:55:36.673885047 +0000 UTC m=+0.108567000 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Nov 29 15:55:38 compute-0 podman[254803]: 2025-11-29 15:55:38.674953142 +0000 UTC m=+0.108348945 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=edpm, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:55:38 compute-0 podman[254796]: 2025-11-29 15:55:38.690328665 +0000 UTC m=+0.128822355 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Nov 29 15:55:38 compute-0 podman[254795]: 2025-11-29 15:55:38.721502154 +0000 UTC m=+0.168109561 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=edpm, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, vcs-type=git, distribution-scope=public, name=ubi9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543)
Nov 29 15:55:38 compute-0 podman[254798]: 2025-11-29 15:55:38.72545668 +0000 UTC m=+0.134532579 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Nov 29 15:55:38 compute-0 podman[254797]: 2025-11-29 15:55:38.750450972 +0000 UTC m=+0.171431271 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi)
Nov 29 15:55:39 compute-0 nova_compute[189485]: 2025-11-29 15:55:39.235 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:40 compute-0 nova_compute[189485]: 2025-11-29 15:55:40.796 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:42 compute-0 nova_compute[189485]: 2025-11-29 15:55:42.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 15:55:42 compute-0 ovn_controller[97827]: 2025-11-29T15:55:42Z|00025|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:4f:96:51 10.100.0.4
Nov 29 15:55:42 compute-0 ovn_controller[97827]: 2025-11-29T15:55:42Z|00026|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:4f:96:51 10.100.0.4
Nov 29 15:55:42 compute-0 podman[254901]: 2025-11-29 15:55:42.672960762 +0000 UTC m=+0.118418036 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 15:55:44 compute-0 nova_compute[189485]: 2025-11-29 15:55:44.239 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:44 compute-0 podman[254919]: 2025-11-29 15:55:44.68789672 +0000 UTC m=+0.134687793 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:55:45 compute-0 nova_compute[189485]: 2025-11-29 15:55:45.800 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.583 189489 INFO nova.compute.manager [None req-f4ad34cd-aa13-4fa2-b91b-e689a7495d9d 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Get console output#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.592 239607 INFO nova.privsep.libvirt [-] Ignored error while reading from instance console pty: can't concat NoneType to bytes#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.898 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "e88c51da-0fd1-40c7-9084-fb672a0ac109" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.899 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.900 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.900 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.901 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.902 189489 INFO nova.compute.manager [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Terminating instance#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.903 189489 DEBUG nova.compute.manager [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:55:48 compute-0 kernel: tapf2519551-d7 (unregistering): left promiscuous mode
Nov 29 15:55:48 compute-0 NetworkManager[56360]: <info>  [1764431748.9501] device (tapf2519551-d7): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:55:48 compute-0 ovn_controller[97827]: 2025-11-29T15:55:48Z|00171|binding|INFO|Releasing lport f2519551-d78d-4d96-b57a-13c24687d7d6 from this chassis (sb_readonly=0)
Nov 29 15:55:48 compute-0 ovn_controller[97827]: 2025-11-29T15:55:48Z|00172|binding|INFO|Setting lport f2519551-d78d-4d96-b57a-13c24687d7d6 down in Southbound
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.965 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:48 compute-0 ovn_controller[97827]: 2025-11-29T15:55:48Z|00173|binding|INFO|Removing iface tapf2519551-d7 ovn-installed in OVS
Nov 29 15:55:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:48.980 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4f:96:51 10.100.0.4'], port_security=['fa:16:3e:4f:96:51 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': 'e88c51da-0fd1-40c7-9084-fb672a0ac109', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aac53958ac1141be8c52323cdbc3e956', 'neutron:revision_number': '4', 'neutron:security_group_ids': '3ddd312c-8d2b-43f5-b273-508a1341c04d', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.173'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32ea6e1f-12a5-46ef-82e5-118dabc8eb05, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=f2519551-d78d-4d96-b57a-13c24687d7d6) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:55:48 compute-0 nova_compute[189485]: 2025-11-29 15:55:48.983 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:48.985 106713 INFO neutron.agent.ovn.metadata.agent [-] Port f2519551-d78d-4d96-b57a-13c24687d7d6 in datapath 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b unbound from our chassis#033[00m
Nov 29 15:55:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:48.987 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b#033[00m
Nov 29 15:55:49 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Deactivated successfully.
Nov 29 15:55:49 compute-0 systemd[1]: machine-qemu\x2d16\x2dinstance\x2d0000000f.scope: Consumed 33.841s CPU time.
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.005 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[f2f0c9cd-a358-4c89-9291-1665ce526e9b]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:49 compute-0 systemd-machined[155802]: Machine qemu-16-instance-0000000f terminated.
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.274 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.285 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[3c33d2d0-54fa-4bd4-a1d4-a66fc850b441]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.289 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[7068a919-eba6-4e08-aa10-c7a041385d32]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.289 189489 DEBUG nova.compute.manager [req-b571fc00-e5f1-435b-9858-467227e8622b req-680fe2b6-7ea0-43c6-96b4-f8e60de92006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received event network-vif-unplugged-f2519551-d78d-4d96-b57a-13c24687d7d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.290 189489 DEBUG oslo_concurrency.lockutils [req-b571fc00-e5f1-435b-9858-467227e8622b req-680fe2b6-7ea0-43c6-96b4-f8e60de92006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.290 189489 DEBUG oslo_concurrency.lockutils [req-b571fc00-e5f1-435b-9858-467227e8622b req-680fe2b6-7ea0-43c6-96b4-f8e60de92006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.291 189489 DEBUG oslo_concurrency.lockutils [req-b571fc00-e5f1-435b-9858-467227e8622b req-680fe2b6-7ea0-43c6-96b4-f8e60de92006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.291 189489 DEBUG nova.compute.manager [req-b571fc00-e5f1-435b-9858-467227e8622b req-680fe2b6-7ea0-43c6-96b4-f8e60de92006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] No waiting events found dispatching network-vif-unplugged-f2519551-d78d-4d96-b57a-13c24687d7d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.291 189489 DEBUG nova.compute.manager [req-b571fc00-e5f1-435b-9858-467227e8622b req-680fe2b6-7ea0-43c6-96b4-f8e60de92006 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received event network-vif-unplugged-f2519551-d78d-4d96-b57a-13c24687d7d6 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.292 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.297 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.320 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[9aafb9c9-5639-4fbf-9fbc-761ddfb8b1ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.338 189489 INFO nova.virt.libvirt.driver [-] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Instance destroyed successfully.#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.338 189489 DEBUG nova.objects.instance [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lazy-loading 'resources' on Instance uuid e88c51da-0fd1-40c7-9084-fb672a0ac109 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.339 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[d69571e4-ded4-47d6-a450-8d298f52d0a9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9b5208cc-e1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:06:79:97'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 43], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540694, 'reachable_time': 41561, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254983, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.355 189489 DEBUG nova.virt.libvirt.vif [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:55:02Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1759961201',display_name='tempest-TestNetworkBasicOps-server-1759961201',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1759961201',id=15,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBJZy2hvTYFO/rYWDP0SPQtmW14+hvIgoA8FFJMbb720PdMfA9owmAb/O98hPijQ8mmc3EFgtLFDl3IaUuyfi9u9aOm0NyLvIfNjgQtC1NwsBVMqXTkP8qYk1Tg6wQU2zSg==',key_name='tempest-TestNetworkBasicOps-882265342',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:55:11Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aac53958ac1141be8c52323cdbc3e956',ramdisk_id='',reservation_id='r-2ryrudo0',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-729114730',owner_user_name='tempest-TestNetworkBasicOps-729114730-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:55:11Z,user_data=None,user_id='08fa71399ec746088caaa6ce113cf5bc',uuid=e88c51da-0fd1-40c7-9084-fb672a0ac109,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.356 189489 DEBUG nova.network.os_vif_util [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converting VIF {"id": "f2519551-d78d-4d96-b57a-13c24687d7d6", "address": "fa:16:3e:4f:96:51", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.173", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf2519551-d7", "ovs_interfaceid": "f2519551-d78d-4d96-b57a-13c24687d7d6", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.356 189489 DEBUG nova.network.os_vif_util [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:4f:96:51,bridge_name='br-int',has_traffic_filtering=True,id=f2519551-d78d-4d96-b57a-13c24687d7d6,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2519551-d7') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.357 189489 DEBUG os_vif [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:96:51,bridge_name='br-int',has_traffic_filtering=True,id=f2519551-d78d-4d96-b57a-13c24687d7d6,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2519551-d7') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
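The Converting/Converted/Unplugging sequence above is Nova handing the legacy vif dict to os-vif, which dispatches to the 'ovs' plugin. A minimal sketch of driving the same unplug through the os-vif public API, with field values taken from the logged VIFOpenVSwitch repr; building the objects by hand like this is illustrative only, since Nova constructs them in nova.network.os_vif_util:

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()  # loads the 'ovs' plugin among others

    port = vif.VIFOpenVSwitch(
        id="f2519551-d78d-4d96-b57a-13c24687d7d6",
        address="fa:16:3e:4f:96:51",
        vif_name="tapf2519551-d7",
        bridge_name="br-int",
        plugin="ovs",
        port_profile=vif.VIFPortProfileOpenVSwitch(
            interface_id="f2519551-d78d-4d96-b57a-13c24687d7d6"),
        network=network.Network(id="9b5208cc-e5fa-4a99-99d7-6c6537b56a0b",
                                bridge="br-int"),
        has_traffic_filtering=True,
        preserve_on_delete=False,
        active=True,
    )
    inst = instance_info.InstanceInfo(
        uuid="e88c51da-0fd1-40c7-9084-fb672a0ac109",
        name="tempest-TestNetworkBasicOps-server-1759961201")

    os_vif.unplug(port, inst)  # the call logged at os_vif/__init__.py:109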
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.358 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.358 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf2519551-d7, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
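The DelPortCommand above is ovsdbapp queueing a single-command transaction against the local Open_vSwitch database. A minimal standalone sketch of the same delete, assuming ovsdb-server listens on the usual unix:/run/openvswitch/db.sock socket:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Same semantics as the logged command: if_exists=True makes the
    # delete a no-op when the port is already gone.
    api.del_port("tapf2519551-d7", bridge="br-int",
                 if_exists=True).execute(check_error=True)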
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.358 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[30f3750e-c2fa-48f6-9002-17c437448ee8]: (4, ({'family': 2, 'prefixlen': 28, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.0.15'], ['IFA_LABEL', 'tap9b5208cc-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540706, 'tstamp': 540706}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254984, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap9b5208cc-e1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 540708, 'tstamp': 540708}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 254984, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
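The privsep reply above is a dump of RTM_NEWADDR messages from inside the ovnmeta namespace: tap9b5208cc-e1 still carries 10.100.0.2/28 plus the 169.254.169.254/32 metadata address. A short sketch of reading the same data with pyroute2 (entering the namespace requires root):

    from pyroute2 import NetNS

    with NetNS("ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b") as ns:
        for msg in ns.get_addr():
            attrs = dict(msg["attrs"])
            print(f"{attrs.get('IFA_LABEL')} "
                  f"{attrs.get('IFA_ADDRESS')}/{msg['prefixlen']}")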
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.360 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b5208cc-e0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.360 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.361 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.363 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.363 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9b5208cc-e0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.363 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.364 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9b5208cc-e0, col_values=(('external_ids', {'iface-id': '4b21e6be-af46-463f-9bba-3aa8bb5c67fb'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:55:49 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:49.364 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
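Both commits above report "Transaction caused no change" because the metadata agent re-asserts state idempotently: may_exist=True makes the port add a no-op, and the external_ids value being written is the one already stored. A sketch of the same pair of commands in one ovsdbapp transaction, under the same socket assumption as the earlier sketch:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        "unix:/run/openvswitch/db.sock", "Open_vSwitch")
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    # Safe to re-run: when the rows already match, the IDL computes an
    # empty diff and logs "Transaction caused no change".
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port("br-int", "tap9b5208cc-e0", may_exist=True))
        txn.add(api.db_set(
            "Interface", "tap9b5208cc-e0",
            ("external_ids",
             {"iface-id": "4b21e6be-af46-463f-9bba-3aa8bb5c67fb"})))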
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.365 189489 INFO os_vif [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:4f:96:51,bridge_name='br-int',has_traffic_filtering=True,id=f2519551-d78d-4d96-b57a-13c24687d7d6,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapf2519551-d7')#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.366 189489 INFO nova.virt.libvirt.driver [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Deleting instance files /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109_del#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.367 189489 INFO nova.virt.libvirt.driver [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Deletion of /var/lib/nova/instances/e88c51da-0fd1-40c7-9084-fb672a0ac109_del complete#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.452 189489 INFO nova.compute.manager [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Took 0.55 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.452 189489 DEBUG oslo.service.loopingcall [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.453 189489 DEBUG nova.compute.manager [-] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:55:49 compute-0 nova_compute[189485]: 2025-11-29 15:55:49.453 189489 DEBUG nova.network.neutron [-] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:55:50 compute-0 nova_compute[189485]: 2025-11-29 15:55:50.801 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.144 189489 DEBUG nova.network.neutron [-] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.178 189489 INFO nova.compute.manager [-] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Took 1.72 seconds to deallocate network for instance.#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.267 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.268 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.288 189489 DEBUG nova.compute.manager [req-bd777ebf-589b-4997-8047-41689b59b0fe req-2c52ce56-caf2-46a7-9a9d-93ac8e51c00d 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received event network-vif-deleted-f2519551-d78d-4d96-b57a-13c24687d7d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.410 189489 DEBUG nova.compute.manager [req-90cffe85-9e0e-4db1-8239-8f26937abdfc req-27f60ce7-401d-4eb2-bf90-678d623b0084 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received event network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.410 189489 DEBUG oslo_concurrency.lockutils [req-90cffe85-9e0e-4db1-8239-8f26937abdfc req-27f60ce7-401d-4eb2-bf90-678d623b0084 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.411 189489 DEBUG oslo_concurrency.lockutils [req-90cffe85-9e0e-4db1-8239-8f26937abdfc req-27f60ce7-401d-4eb2-bf90-678d623b0084 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.411 189489 DEBUG oslo_concurrency.lockutils [req-90cffe85-9e0e-4db1-8239-8f26937abdfc req-27f60ce7-401d-4eb2-bf90-678d623b0084 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
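The Acquiring/acquired/released triplets around update_usage and _pop_event come from oslo.concurrency, which logs waited and held times for every named lock. A minimal sketch of the two usage patterns seen here (function bodies are placeholders):

    from oslo_concurrency import lockutils

    # Decorator form, as used for the resource tracker's shared state.
    @lockutils.synchronized("compute_resources")
    def update_usage():
        ...  # mutate tracked resources while holding the lock

    # Context-manager form with a per-instance lock name, as used for
    # the "<uuid>-events" locks above.
    def pop_event(instance_uuid):
        with lockutils.lock(f"{instance_uuid}-events"):
            ...  # pop a waiting event, if any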
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.411 189489 DEBUG nova.compute.manager [req-90cffe85-9e0e-4db1-8239-8f26937abdfc req-27f60ce7-401d-4eb2-bf90-678d623b0084 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] No waiting events found dispatching network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.412 189489 WARNING nova.compute.manager [req-90cffe85-9e0e-4db1-8239-8f26937abdfc req-27f60ce7-401d-4eb2-bf90-678d623b0084 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Received unexpected event network-vif-plugged-f2519551-d78d-4d96-b57a-13c24687d7d6 for instance with vm_state deleted and task_state None.#033[00m
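The WARNING above is the normal delete-time race: OVN still emits a network-vif-plugged event while Nova's record already says vm_state deleted, and because nothing registered a waiter for that event it is logged and dropped rather than treated as an error. A deliberately simplified sketch of that pop-or-warn dispatch (not Nova's actual code, just the shape of it):

    import logging

    LOG = logging.getLogger(__name__)
    waiters = {}  # (instance_uuid, event_name) -> callback

    def pop_instance_event(instance_uuid, event_name, vm_state, task_state):
        cb = waiters.pop((instance_uuid, event_name), None)
        if cb is not None:
            cb()  # a waiter (e.g. a plug operation) expected this event
            return
        LOG.warning("Received unexpected event %s for instance with "
                    "vm_state %s and task_state %s.",
                    event_name, vm_state, task_state)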
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.450 189489 DEBUG nova.compute.provider_tree [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.465 189489 DEBUG nova.scheduler.client.report [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.488 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.525 189489 INFO nova.scheduler.client.report [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Deleted allocations for instance e88c51da-0fd1-40c7-9084-fb672a0ac109#033[00m
Nov 29 15:55:51 compute-0 nova_compute[189485]: 2025-11-29 15:55:51.610 189489 DEBUG oslo_concurrency.lockutils [None req-607f8f78-d5df-4503-934b-c635a49aff7a 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "e88c51da-0fd1-40c7-9084-fb672a0ac109" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.711s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.655 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "f8649788-26c9-4497-a517-f989c3c9cdb7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.656 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.656 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.657 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.657 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.659 189489 INFO nova.compute.manager [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Terminating instance#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.660 189489 DEBUG nova.compute.manager [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 15:55:53 compute-0 kernel: tapbc8a9aec-d4 (unregistering): left promiscuous mode
Nov 29 15:55:53 compute-0 NetworkManager[56360]: <info>  [1764431753.7166] device (tapbc8a9aec-d4): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.723 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:53 compute-0 ovn_controller[97827]: 2025-11-29T15:55:53Z|00174|binding|INFO|Releasing lport bc8a9aec-d49d-411d-8b11-6c05461f6ed4 from this chassis (sb_readonly=0)
Nov 29 15:55:53 compute-0 ovn_controller[97827]: 2025-11-29T15:55:53Z|00175|binding|INFO|Setting lport bc8a9aec-d49d-411d-8b11-6c05461f6ed4 down in Southbound
Nov 29 15:55:53 compute-0 ovn_controller[97827]: 2025-11-29T15:55:53Z|00176|binding|INFO|Removing iface tapbc8a9aec-d4 ovn-installed in OVS
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.726 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:53 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:53.732 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:7e:5f:3b 10.100.0.10'], port_security=['fa:16:3e:7e:5f:3b 10.100.0.10'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.10/28', 'neutron:device_id': 'f8649788-26c9-4497-a517-f989c3c9cdb7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'aac53958ac1141be8c52323cdbc3e956', 'neutron:revision_number': '4', 'neutron:security_group_ids': '6406711a-fc6c-4239-9b58-d82b897202ce', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=32ea6e1f-12a5-46ef-82e5-118dabc8eb05, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=bc8a9aec-d49d-411d-8b11-6c05461f6ed4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 15:55:53 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:53.733 106713 INFO neutron.agent.ovn.metadata.agent [-] Port bc8a9aec-d49d-411d-8b11-6c05461f6ed4 in datapath 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b unbound from our chassis#033[00m
Nov 29 15:55:53 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:53.735 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9b5208cc-e5fa-4a99-99d7-6c6537b56a0b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 15:55:53 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:53.736 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[f330782c-e155-4bb0-a022-366015cd9599]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:53 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:53.737 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b namespace which is not needed anymore#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.748 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:53 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Nov 29 15:55:53 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 45.984s CPU time.
Nov 29 15:55:53 compute-0 systemd-machined[155802]: Machine qemu-14-instance-0000000d terminated.
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.937 189489 INFO nova.virt.libvirt.driver [-] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Instance destroyed successfully.#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.938 189489 DEBUG nova.objects.instance [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lazy-loading 'resources' on Instance uuid f8649788-26c9-4497-a517-f989c3c9cdb7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 15:55:53 compute-0 neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b[254090]: [NOTICE]   (254108) : haproxy version is 2.8.14-c23fe91
Nov 29 15:55:53 compute-0 neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b[254090]: [NOTICE]   (254108) : path to executable is /usr/sbin/haproxy
Nov 29 15:55:53 compute-0 neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b[254090]: [WARNING]  (254108) : Exiting Master process...
Nov 29 15:55:53 compute-0 neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b[254090]: [ALERT]    (254108) : Current worker (254111) exited with code 143 (Terminated)
Nov 29 15:55:53 compute-0 neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b[254090]: [WARNING]  (254108) : All workers exited. Exiting... (0)
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.953 189489 DEBUG nova.virt.libvirt.vif [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:54:03Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestNetworkBasicOps-server-1911938473',display_name='tempest-TestNetworkBasicOps-server-1911938473',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testnetworkbasicops-server-1911938473',id=13,image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBLQHqtawrjL2wQM17CQzJFmBeXoduG4angmB0jo9/RQYpY+v/NgXODpz5JsRknVFMlKfiC+y5ptrvfJjydPALtpgesZrfIdXd90qxXP6XvXJafN6f5SdFPOHokIZP8lIqQ==',key_name='tempest-TestNetworkBasicOps-1298186890',keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:54:12Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='aac53958ac1141be8c52323cdbc3e956',ramdisk_id='',reservation_id='r-n125kngd',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='6a931c3a-089f-4276-ac71-a0da3ffce7c7',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestNetworkBasicOps-729114730',owner_user_name='tempest-TestNetworkBasicOps-729114730-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:54:12Z,user_data=None,user_id='08fa71399ec746088caaa6ce113cf5bc',uuid=f8649788-26c9-4497-a517-f989c3c9cdb7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.954 189489 DEBUG nova.network.os_vif_util [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converting VIF {"id": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "address": "fa:16:3e:7e:5f:3b", "network": {"id": "9b5208cc-e5fa-4a99-99d7-6c6537b56a0b", "bridge": "br-int", "label": "tempest-network-smoke--744038075", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "aac53958ac1141be8c52323cdbc3e956", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapbc8a9aec-d4", "ovs_interfaceid": "bc8a9aec-d49d-411d-8b11-6c05461f6ed4", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.954 189489 DEBUG nova.network.os_vif_util [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:7e:5f:3b,bridge_name='br-int',has_traffic_filtering=True,id=bc8a9aec-d49d-411d-8b11-6c05461f6ed4,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8a9aec-d4') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.954 189489 DEBUG os_vif [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:5f:3b,bridge_name='br-int',has_traffic_filtering=True,id=bc8a9aec-d49d-411d-8b11-6c05461f6ed4,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8a9aec-d4') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.956 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:53 compute-0 systemd[1]: libpod-35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b.scope: Deactivated successfully.
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.956 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapbc8a9aec-d4, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.958 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:53 compute-0 conmon[254090]: conmon 35571bc125013cbff131 <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b.scope/container/memory.events
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.960 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.960 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:53 compute-0 podman[255008]: 2025-11-29 15:55:53.962192206 +0000 UTC m=+0.086435516 container died 35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.963 189489 INFO os_vif [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:7e:5f:3b,bridge_name='br-int',has_traffic_filtering=True,id=bc8a9aec-d49d-411d-8b11-6c05461f6ed4,network=Network(9b5208cc-e5fa-4a99-99d7-6c6537b56a0b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapbc8a9aec-d4')#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.964 189489 INFO nova.virt.libvirt.driver [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Deleting instance files /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7_del#033[00m
Nov 29 15:55:53 compute-0 nova_compute[189485]: 2025-11-29 15:55:53.964 189489 INFO nova.virt.libvirt.driver [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Deletion of /var/lib/nova/instances/f8649788-26c9-4497-a517-f989c3c9cdb7_del complete#033[00m
Nov 29 15:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay-f8583d1591c6ddab750aff0a60473c25f23807332d9ccac6a64e9a81bd135267-merged.mount: Deactivated successfully.
Nov 29 15:55:54 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b-userdata-shm.mount: Deactivated successfully.
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.028 189489 INFO nova.compute.manager [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.028 189489 DEBUG oslo.service.loopingcall [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.028 189489 DEBUG nova.compute.manager [-] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.028 189489 DEBUG nova.network.neutron [-] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 15:55:54 compute-0 podman[255008]: 2025-11-29 15:55:54.029787583 +0000 UTC m=+0.154030883 container cleanup 35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 15:55:54 compute-0 systemd[1]: libpod-conmon-35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b.scope: Deactivated successfully.
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.096 189489 DEBUG nova.compute.manager [req-83ed0874-2ec1-4c8f-b018-518246564f9a req-81ba1684-a2f0-4536-80a0-74227a8546fc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-vif-unplugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.097 189489 DEBUG oslo_concurrency.lockutils [req-83ed0874-2ec1-4c8f-b018-518246564f9a req-81ba1684-a2f0-4536-80a0-74227a8546fc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.097 189489 DEBUG oslo_concurrency.lockutils [req-83ed0874-2ec1-4c8f-b018-518246564f9a req-81ba1684-a2f0-4536-80a0-74227a8546fc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.098 189489 DEBUG oslo_concurrency.lockutils [req-83ed0874-2ec1-4c8f-b018-518246564f9a req-81ba1684-a2f0-4536-80a0-74227a8546fc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.098 189489 DEBUG nova.compute.manager [req-83ed0874-2ec1-4c8f-b018-518246564f9a req-81ba1684-a2f0-4536-80a0-74227a8546fc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] No waiting events found dispatching network-vif-unplugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.099 189489 DEBUG nova.compute.manager [req-83ed0874-2ec1-4c8f-b018-518246564f9a req-81ba1684-a2f0-4536-80a0-74227a8546fc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-vif-unplugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 15:55:54 compute-0 podman[255053]: 2025-11-29 15:55:54.106499636 +0000 UTC m=+0.053051207 container remove 35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.119 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[36887fee-c07b-4c1e-8a3e-777a1e03dbd1]: (4, ('Sat Nov 29 03:55:53 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b (35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b)\n35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b\nSat Nov 29 03:55:54 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b (35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b)\n35571bc125013cbff1318dc9153fb8d66195955b36047bb28b7012645019c46b\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
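Every "privsep: reply[...]" line is the agent's unprivileged side receiving a result from its forked privsep daemon; here the payload is the stdout of the container stop/delete script. Privileged calls are declared as entrypoints on a PrivContext. A minimal sketch of that pattern (the context and capability set below are illustrative; Neutron defines its own contexts under neutron.privileged):

    from oslo_privsep import capabilities, priv_context

    ctx = priv_context.PrivContext(
        __name__,
        cfg_section="privsep",
        pfile=__file__,
        capabilities=[capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN],
    )

    @ctx.entrypoint
    def remove_device(name):
        # Runs inside the privsep daemon with the capabilities above;
        # the return value travels back as one of the reply[...] messages.
        ...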
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.121 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[fcfa6c67-e04f-49d7-8da4-854477af9c23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.123 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9b5208cc-e0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.125 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:54 compute-0 kernel: tap9b5208cc-e0: left promiscuous mode
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.148 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.149 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[9ea2c409-b2b5-4407-b92c-b7c3913edac5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.164 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[8f1ded22-fd5e-40c1-a191-937e6cc566ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.165 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[41672cbb-e2a7-4aca-8c4d-88697822756a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.183 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[abc40454-fa90-4617-856e-371e745107de]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 540686, 'reachable_time': 36861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255069, 'error': None, 'target': 'ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:54 compute-0 systemd[1]: run-netns-ovnmeta\x2d9b5208cc\x2de5fa\x2d4a99\x2d99d7\x2d6c6537b56a0b.mount: Deactivated successfully.
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.191 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
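remove_netns in neutron.privileged.agent.linux.ip_lib is a thin privileged wrapper over pyroute2. A sketch of the equivalent standalone call, which needs the same privileges the privsep daemon holds:

    from pyroute2 import netns

    ns_name = "ovnmeta-9b5208cc-e5fa-4a99-99d7-6c6537b56a0b"
    if ns_name in netns.listnetns():
        netns.remove(ns_name)  # unpins and deletes the named namespace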
Nov 29 15:55:54 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:54.191 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[1841a275-f78a-457e-b3ca-2f259489474f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.494 189489 DEBUG nova.network.neutron [-] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.512 189489 INFO nova.compute.manager [-] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Took 0.48 seconds to deallocate network for instance.#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.581 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.582 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.698 189489 DEBUG nova.compute.provider_tree [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.712 189489 DEBUG nova.scheduler.client.report [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
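Placement treats each resource class's schedulable capacity as (total - reserved) * allocation_ratio, which is why the report client only has to compare these dicts to conclude nothing changed. A worked check against the exact inventory logged above:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2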
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.731 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.753 189489 INFO nova.scheduler.client.report [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Deleted allocations for instance f8649788-26c9-4497-a517-f989c3c9cdb7#033[00m
Nov 29 15:55:54 compute-0 nova_compute[189485]: 2025-11-29 15:55:54.840 189489 DEBUG oslo_concurrency.lockutils [None req-63bb8c57-45e1-4b39-8194-3dfbcfc8e1a0 08fa71399ec746088caaa6ce113cf5bc aac53958ac1141be8c52323cdbc3e956 - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 15:55:55 compute-0 nova_compute[189485]: 2025-11-29 15:55:55.805 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:56 compute-0 nova_compute[189485]: 2025-11-29 15:55:56.257 189489 DEBUG nova.compute.manager [req-3668d83a-e5c4-48ee-afdd-96d511a35018 req-1d4368a6-49e9-4071-85af-f267afbb72cc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 15:55:56 compute-0 nova_compute[189485]: 2025-11-29 15:55:56.257 189489 DEBUG oslo_concurrency.lockutils [req-3668d83a-e5c4-48ee-afdd-96d511a35018 req-1d4368a6-49e9-4071-85af-f267afbb72cc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:55:56 compute-0 nova_compute[189485]: 2025-11-29 15:55:56.257 189489 DEBUG oslo_concurrency.lockutils [req-3668d83a-e5c4-48ee-afdd-96d511a35018 req-1d4368a6-49e9-4071-85af-f267afbb72cc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:55:56 compute-0 nova_compute[189485]: 2025-11-29 15:55:56.257 189489 DEBUG oslo_concurrency.lockutils [req-3668d83a-e5c4-48ee-afdd-96d511a35018 req-1d4368a6-49e9-4071-85af-f267afbb72cc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "f8649788-26c9-4497-a517-f989c3c9cdb7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:55:56 compute-0 nova_compute[189485]: 2025-11-29 15:55:56.257 189489 DEBUG nova.compute.manager [req-3668d83a-e5c4-48ee-afdd-96d511a35018 req-1d4368a6-49e9-4071-85af-f267afbb72cc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] No waiting events found dispatching network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 15:55:56 compute-0 nova_compute[189485]: 2025-11-29 15:55:56.258 189489 WARNING nova.compute.manager [req-3668d83a-e5c4-48ee-afdd-96d511a35018 req-1d4368a6-49e9-4071-85af-f267afbb72cc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received unexpected event network-vif-plugged-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 for instance with vm_state deleted and task_state None.
Nov 29 15:55:56 compute-0 nova_compute[189485]: 2025-11-29 15:55:56.258 189489 DEBUG nova.compute.manager [req-3668d83a-e5c4-48ee-afdd-96d511a35018 req-1d4368a6-49e9-4071-85af-f267afbb72cc 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Received event network-vif-deleted-bc8a9aec-d49d-411d-8b11-6c05461f6ed4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
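The "No waiting events found" / "Received unexpected event" pair reflects nova's external-event bookkeeping: a waiter registers interest keyed by instance and event name, and the handler pops and signals it when neutron's notification arrives. Here the instance was already deleted, so nothing was registered. A rough sketch of that pop-or-log pattern, with illustrative names rather than nova's exact code:

```python
# Minimal sketch of the "waiting events" pattern the manager logs:
# a writer registers an expectation keyed by event name, and the
# external-event handler pops and signals it, or reports that no
# waiter exists (as above for the already-deleted instance).
import threading

class InstanceEvents:
    def __init__(self):
        self._lock = threading.Lock()
        self._events = {}  # (instance_uuid, event_key) -> threading.Event

    def prepare(self, instance_uuid, event_key):
        ev = threading.Event()
        with self._lock:
            self._events[(instance_uuid, event_key)] = ev
        return ev  # caller blocks on ev.wait(timeout)

    def pop_instance_event(self, instance_uuid, event_key):
        with self._lock:
            ev = self._events.pop((instance_uuid, event_key), None)
        if ev is None:
            print(f"No waiting events found dispatching {event_key}")
        else:
            ev.set()  # wake the waiter
        return ev

events = InstanceEvents()
events.pop_instance_event('f8649788', 'network-vif-plugged-bc8a9aec')
```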
Nov 29 15:55:57 compute-0 podman[255070]: 2025-11-29 15:55:57.673417042 +0000 UTC m=+0.107149282 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:55:58 compute-0 nova_compute[189485]: 2025-11-29 15:55:58.844 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:58 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:58.843 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 15:55:58 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:58.849 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
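The "Matched UPDATE" line is an ovsdbapp row event firing on the SB_Global table when nb_cfg advances. A minimal sketch of such an event class, assuming ovsdbapp is installed; the run() body stands in for neutron's delayed chassis-table refresh and is illustrative.

```python
# Sketch of the ovsdbapp RowEvent behind the "Matched UPDATE" line:
# the agent subscribes to SB_Global updates, and the IDL notify loop
# invokes run() whenever a matching row changes.
from ovsdbapp.backend.ovs_idl import event as row_event

class SbGlobalUpdateEvent(row_event.RowEvent):
    def __init__(self):
        # Match 'update' events on SB_Global with no extra conditions,
        # mirroring the event signature printed in the log above.
        super().__init__((self.ROW_UPDATE,), 'SB_Global', None)

    def run(self, event, row, old):
        # Called with the new and old row; neutron reacts here
        # (in the log: "Delaying updating chassis table for 6 seconds").
        print(f"SB_Global updated: nb_cfg={row.nb_cfg}")
```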
Nov 29 15:55:58 compute-0 nova_compute[189485]: 2025-11-29 15:55:58.959 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:55:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:59.213 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:55:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:59.225 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.012s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:55:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:55:59.227 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:55:59 compute-0 podman[203677]: time="2025-11-29T15:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:55:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:55:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 29 15:55:59 compute-0 ovn_controller[97827]: 2025-11-29T15:55:59Z|00177|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:56:00 compute-0 nova_compute[189485]: 2025-11-29 15:56:00.005 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:00 compute-0 ovn_controller[97827]: 2025-11-29T15:56:00Z|00178|binding|INFO|Releasing lport 44ccce0e-f764-41d1-8796-ff08932a6de2 from this chassis (sb_readonly=0)
Nov 29 15:56:00 compute-0 nova_compute[189485]: 2025-11-29 15:56:00.202 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:00 compute-0 nova_compute[189485]: 2025-11-29 15:56:00.809 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.062 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.063 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
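The ceilometer warning above is purely about executor sizing: with more pollsters than worker threads, a polling cycle serializes. A small illustration with concurrent.futures; the worker count of 1 mirrors the "[1] threads" line, while the pollster names and sleep are stand-ins.

```python
# Illustration of the situation ceilometer warns about: when the
# number of pollsters exceeds the executor's worker threads, the
# cycle runs them one after another instead of in parallel.
import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)  # stand-in for one pollster's work
    return name

pollsters = ['network.outgoing.bytes', 'memory.usage', 'cpu']
start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as executor:
    list(executor.map(poll, pollsters))
# With 1 worker the cycle takes ~len(pollsters) * 0.1s, not ~0.1s.
print(f"cycle took {time.monotonic() - start:.2f}s")
```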
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c619970>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.081 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'name': 'te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.085 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance a1c56ffa-6d1c-408c-8667-517745513fd0 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Nov 29 15:56:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:01.087 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/a1c56ffa-6d1c-408c-8667-517745513fd0 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}21f1b25129fd7f828fba82e66d37137d0fe6cb4aa99b37755c299ad1aab8f053" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
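The REQ line is keystoneauth's curl-style trace of a novaclient GET. A hedged reconstruction of the equivalent client call, assuming python-novaclient and keystoneauth1 are installed; the auth URL and credentials are placeholders, and only the server UUID and the 2.1 microversion come from the log.

```python
# Sketch: the novaclient call behind the REQ/RESP pair above.
# Auth parameters are hypothetical placeholders for this deployment.
from keystoneauth1.identity import v3
from keystoneauth1 import session
from novaclient import client as nova_client

auth = v3.Password(auth_url='https://keystone.example.com:5000/v3',  # placeholder
                   username='ceilometer', password='<secret>',
                   project_name='service',
                   user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)
nova = nova_client.Client('2.1', session=sess)  # X-OpenStack-Nova-API-Version: 2.1

server = nova.servers.get('a1c56ffa-6d1c-408c-8667-517745513fd0')
print(server.name, getattr(server, 'OS-EXT-STS:vm_state'))
```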
Nov 29 15:56:01 compute-0 openstack_network_exporter[205841]: ERROR   15:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:56:01 compute-0 openstack_network_exporter[205841]: ERROR   15:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:56:01 compute-0 openstack_network_exporter[205841]: ERROR   15:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:56:01 compute-0 openstack_network_exporter[205841]: ERROR   15:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:56:01 compute-0 openstack_network_exporter[205841]: ERROR   15:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.050 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Sat, 29 Nov 2025 15:56:01 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-2ac6e259-f056-4005-8fea-ab94caa782d6 x-openstack-request-id: req-2ac6e259-f056-4005-8fea-ab94caa782d6 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.051 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "a1c56ffa-6d1c-408c-8667-517745513fd0", "name": "te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo", "status": "ACTIVE", "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "user_id": "997fde32c4f7472e87493536b60e7b64", "metadata": {"metering.server_group": "4838e190-17b5-46fc-b5c5-64e289c1eccb"}, "hostId": "ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a", "image": {"id": "276c0a04-08bd-40bb-ad7b-a0be69fa4466", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/276c0a04-08bd-40bb-ad7b-a0be69fa4466"}]}, "flavor": {"id": "cde1daa0-956a-446c-a1eb-2046e0cd1fa7", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cde1daa0-956a-446c-a1eb-2046e0cd1fa7"}]}, "created": "2025-11-29T15:54:40Z", "updated": "2025-11-29T15:54:49Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.182", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:0e:87:f3"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/a1c56ffa-6d1c-408c-8667-517745513fd0"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/a1c56ffa-6d1c-408c-8667-517745513fd0"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-11-29T15:54:49.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.051 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/a1c56ffa-6d1c-408c-8667-517745513fd0 used request id req-2ac6e259-f056-4005-8fea-ab94caa782d6 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.053 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a1c56ffa-6d1c-408c-8667-517745513fd0', 'name': 'te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.054 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.054 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.054 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.055 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.056 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:56:02.054995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.063 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.070 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for a1c56ffa-6d1c-408c-8667-517745513fd0 / tap05c6eb06-b3 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.071 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.071 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
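"No delta meter predecessor" explains why the .delta samples below read 0 for the newer instance: a delta is the current cumulative counter minus the cached previous reading, and on the first poll of a vNIC there is nothing to subtract. A toy version of that cache; the structure and numbers are illustrative.

```python
# Sketch of the delta-meter logic behind "No delta meter predecessor":
# delta = current cumulative reading - cached previous reading.
previous = {}  # (instance_id, device) -> last cumulative reading

def delta_sample(instance_id, device, cumulative):
    key = (instance_id, device)
    if key not in previous:
        # First poll of this vNIC: no predecessor, start the cache.
        print(f"No delta meter predecessor for {instance_id} / {device}")
        previous[key] = cumulative
        return 0
    delta = cumulative - previous[key]
    previous[key] = cumulative
    return delta

print(delta_sample('a1c56ffa', 'tap05c6eb06-b3', 1620))  # -> 0 (first poll)
print(delta_sample('a1c56ffa', 'tap05c6eb06-b3', 1788))  # -> 168
```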
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.072 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.072 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.072 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.072 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.073 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.073 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.074 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:56:02.072764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.075 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.075 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.076 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:56:02.075420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.114 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/memory.usage volume: 43.5390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.153 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/memory.usage volume: 43.82421875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.154 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.155 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.155 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:56:02.154954) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.156 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes volume: 1346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.157 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.157 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.157 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.158 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo>]
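PollsterPermanentError is ceilometer's way of blacklisting a resource a pollster can never serve: here LibvirtInspector provides no precomputed rate data, so network.outgoing.bytes.rate gives up on the instance instead of retrying every cycle. A simplified sketch of that give-up-permanently flow; the class and loop are illustrative, not ceilometer's exact code.

```python
# Sketch of the permanent-error blacklisting shown above.
class PollsterPermanentError(Exception):
    # Mirrors the idea of ceilometer's PollsterPermanentError:
    # carries the resources that should never be polled again.
    def __init__(self, resources):
        self.fail_res_list = resources

blacklist = set()

def get_samples(meter, resource):
    # Stand-in for a pollster whose inspector cannot provide the data.
    raise PollsterPermanentError([resource])

def poll(meter, resources):
    for res in resources:
        if (meter, res) in blacklist:
            continue  # resource was permanently excluded earlier
        try:
            get_samples(meter, res)
        except PollsterPermanentError as exc:
            for failed in exc.fail_res_list:
                print(f"Prevent pollster {meter} from polling {failed} anymore!")
                blacklist.add((meter, failed))

poll('network.outgoing.bytes.rate', ['instance-a'])
poll('network.outgoing.bytes.rate', ['instance-a'])  # second cycle: skipped
```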
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.159 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-11-29T15:56:02.157473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.159 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.159 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes.delta volume: 168 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.160 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.161 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:56:02.159451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.162 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:56:02.162099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.163 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.164 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.164 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:56:02.164490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.229 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.229 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.292 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.293 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.295 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.295 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.295 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.296 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:56:02.295754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.296 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.298 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.298 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:56:02.298603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.298 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.319 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.319 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.359 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.360 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.361 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.362 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.362 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.363 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:56:02.362605) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.363 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/cpu volume: 241580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.363 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/cpu volume: 71270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
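[Annotation] The cpu meter polled above is cumulative guest CPU time in nanoseconds, so the two sampled volumes correspond to roughly 242 s and 71 s of CPU time. A quick check in Python, using the values from the log lines above:

    # Convert the cumulative cpu samples (nanoseconds) to seconds.
    for ns in (241580000000, 71270000000):
        print(ns / 1e9, "s")   # -> 241.58 s and 71.27 s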
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.365 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.365 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:56:02.365792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.366 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 535968866 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.366 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 56326732 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.367 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 639094886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.367 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 59124615 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.368 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.368 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.369 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.369 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.369 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.369 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-11-29T15:56:02.369621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.369 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.370 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.370 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo>]
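[Annotation] The ERROR above is ceilometer's permanent-failure path: the libvirt inspector exposes no per-interface rate data, so the rate pollster raises PollsterPermanentError and the manager stops polling those resources on this source instead of retrying every interval. A minimal sketch of that pattern follows; only PollsterPermanentError is a real ceilometer name, everything else is an illustrative stand-in, not the actual AgentManager API.

    class PollsterPermanentError(Exception):
        """Raised by a pollster for resources it can never serve."""
        def __init__(self, resources):
            super().__init__(str(resources))
            self.resources = resources

    class RatePollster:
        """Stand-in for a rate pollster such as IncomingBytesRatePollster."""
        name = "network.incoming.bytes.rate"

        def get_samples(self, inspector, resources):
            if not getattr(inspector, "provides_rates", False):
                # The inspector (LibvirtInspector in the log) has no rate
                # data, so polling these resources can never succeed.
                raise PollsterPermanentError(resources)
            return [(r, 0.0) for r in resources]

    def poll_once(pollster, inspector, resources, blacklist):
        """One polling pass; permanently failed resources are blacklisted."""
        candidates = [r for r in resources if r not in blacklist]
        try:
            return pollster.get_samples(inspector, candidates)
        except PollsterPermanentError as err:
            # Mirrors the log: "Prevent pollster ... from polling [...] anymore!"
            print(f"Prevent pollster {pollster.name} from polling "
                  f"{err.resources} anymore!")
            blacklist.update(err.resources)
            return []

    class FakeLibvirtInspector:
        provides_rates = False

    blacklist = set()
    poll_once(RatePollster(), FakeLibvirtInspector(), ["server-1"], blacklist)
    print(blacklist)  # {'server-1'} -- later cycles skip this resource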
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.370 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.371 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.371 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.371 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.371 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.372 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.372 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.373 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.373 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.374 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.375 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.375 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:56:02.371839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.376 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.376 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:56:02.376391) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.376 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.377 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.378 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.378 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.379 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.379 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.381 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.381 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.382 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 72802304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.382 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.384 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:56:02.380452) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.384 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.384 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.385 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.385 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 8782275504 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.386 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:56:02.385259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.387 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 3716471872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.388 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.389 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.389 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.389 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.389 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.390 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.391 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:56:02.389836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.391 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.391 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.391 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 308 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.391 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.392 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 305 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.392 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:56:02.391459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.393 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.393 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.394 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:56:02.393828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.394 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.394 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.395 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.395 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.395 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.395 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:56:02.395576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.396 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.396 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.396 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.396 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.396 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.396 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.397 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.397 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:56:02.396866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.397 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.397 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.398 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.398 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.398 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.399 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.399 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:56:02.399078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.399 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets volume: 10 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.400 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.400 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.400 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.400 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.401 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.401 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:56:02.400688) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.401 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.402 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.402 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:56:02.402003) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.402 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.402 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.403 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:56:02.403513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.403 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.404 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.404 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:56:02 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:56:02.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
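[Annotation] Taken together, the block above shows the per-pollster flow the agent repeats every interval: discovery via local_instances, a coordination check (no hash ring is configured here, so every agent polls its own instances), a heartbeat update, per-instance samples, then a "Finished processing" summary per meter. A condensed, hypothetical rendering of that loop; the names are illustrative, not the real AgentManager interface:

    import datetime

    def run_pollster(name, discover, collect, heartbeats):
        # "Executing discovery process for pollsters [...]"
        resources = discover()
        # Coordination check skipped: "The current hashrings are the
        # following [None]" means no coordination group is configured.
        # "Updated heartbeat for <name> (...)"
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
        # "<instance-uuid>/<name> volume: <value>"
        samples = [(r, collect(r)) for r in resources]
        print(f"Finished polling pollster {name} in the context of pollsters")
        return samples

    heartbeats = {}
    run_pollster(
        "disk.device.usage",
        discover=lambda: ["2c879d1e", "a1c56ffa"],
        collect=lambda r: 509952,   # value taken from the log above
        heartbeats=heartbeats,
    )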
Nov 29 15:56:03 compute-0 nova_compute[189485]: 2025-11-29 15:56:03.967 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:04 compute-0 nova_compute[189485]: 2025-11-29 15:56:04.327 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431749.326155, e88c51da-0fd1-40c7-9084-fb672a0ac109 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:56:04 compute-0 nova_compute[189485]: 2025-11-29 15:56:04.328 189489 INFO nova.compute.manager [-] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] VM Stopped (Lifecycle Event)
Nov 29 15:56:04 compute-0 nova_compute[189485]: 2025-11-29 15:56:04.354 189489 DEBUG nova.compute.manager [None req-86a85175-80bf-4db1-b6a5-de95ae2fbadc - - - - - -] [instance: e88c51da-0fd1-40c7-9084-fb672a0ac109] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:56:04 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:56:04.853 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 15:56:05 compute-0 nova_compute[189485]: 2025-11-29 15:56:05.814 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:07 compute-0 podman[255095]: 2025-11-29 15:56:07.709979418 +0000 UTC m=+0.135181897 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
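[Annotation] The podman line above is a periodic healthcheck event for the ceilometer_agent_compute container (health_status=healthy, failing streak 0). To read the same verdict from the host, something like the following should work, assuming podman's docker-compatible inspect JSON; the field path is an assumption, not taken from this log:

    import json
    import subprocess

    def container_health(name: str) -> str:
        # Assumes "podman inspect" emits docker-compatible JSON where the
        # latest healthcheck verdict lives under .State.Health.Status.
        out = subprocess.run(["podman", "inspect", name],
                             capture_output=True, text=True, check=True).stdout
        return json.loads(out)[0]["State"]["Health"]["Status"]

    print(container_health("ceilometer_agent_compute"))  # e.g. "healthy"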
Nov 29 15:56:08 compute-0 nova_compute[189485]: 2025-11-29 15:56:08.930 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764431753.9286313, f8649788-26c9-4497-a517-f989c3c9cdb7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 15:56:08 compute-0 nova_compute[189485]: 2025-11-29 15:56:08.931 189489 INFO nova.compute.manager [-] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] VM Stopped (Lifecycle Event)
Nov 29 15:56:08 compute-0 nova_compute[189485]: 2025-11-29 15:56:08.964 189489 DEBUG nova.compute.manager [None req-6ca61fba-33d1-429b-9e41-6d1b5e04c8f1 - - - - - -] [instance: f8649788-26c9-4497-a517-f989c3c9cdb7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 15:56:08 compute-0 nova_compute[189485]: 2025-11-29 15:56:08.970 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:09 compute-0 podman[255118]: 2025-11-29 15:56:09.662877887 +0000 UTC m=+0.101678525 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:56:09 compute-0 podman[255116]: 2025-11-29 15:56:09.666523106 +0000 UTC m=+0.103933506 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Nov 29 15:56:09 compute-0 podman[255115]: 2025-11-29 15:56:09.676591396 +0000 UTC m=+0.116147864 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ovn_metadata_agent)
Nov 29 15:56:09 compute-0 podman[255114]: 2025-11-29 15:56:09.695821834 +0000 UTC m=+0.132925456 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, release=1214.1726694543)
Nov 29 15:56:09 compute-0 podman[255117]: 2025-11-29 15:56:09.717518117 +0000 UTC m=+0.152595325 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 15:56:10 compute-0 nova_compute[189485]: 2025-11-29 15:56:10.817 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:13 compute-0 podman[255208]: 2025-11-29 15:56:13.689010484 +0000 UTC m=+0.129731250 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 15:56:13 compute-0 nova_compute[189485]: 2025-11-29 15:56:13.973 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:14 compute-0 podman[255227]: 2025-11-29 15:56:14.813383112 +0000 UTC m=+0.080975359 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:56:15 compute-0 nova_compute[189485]: 2025-11-29 15:56:15.820 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:18 compute-0 nova_compute[189485]: 2025-11-29 15:56:18.981 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:19 compute-0 nova_compute[189485]: 2025-11-29 15:56:19.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:20 compute-0 nova_compute[189485]: 2025-11-29 15:56:20.823 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:23 compute-0 nova_compute[189485]: 2025-11-29 15:56:23.987 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:24 compute-0 nova_compute[189485]: 2025-11-29 15:56:24.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:24 compute-0 nova_compute[189485]: 2025-11-29 15:56:24.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:56:24 compute-0 nova_compute[189485]: 2025-11-29 15:56:24.828 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:56:24 compute-0 nova_compute[189485]: 2025-11-29 15:56:24.829 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:56:24 compute-0 nova_compute[189485]: 2025-11-29 15:56:24.829 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:56:25 compute-0 nova_compute[189485]: 2025-11-29 15:56:25.828 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.008 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updating instance_info_cache with network_info: [{"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.037 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.038 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
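
The "Updating instance_info_cache" entry above carries the instance's network_info as a JSON list. A minimal sketch of extracting the fixed IPs from such a payload (the raw variable is a stand-in for the logged JSON, not part of nova):

    import json

    def fixed_ips(raw):
        # raw: the JSON list from the "Updating instance_info_cache" log line
        for vif in json.loads(raw):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    if ip["type"] == "fixed":
                        yield vif["id"], ip["address"]

    # for the entry above this yields
    # ("05c6eb06-b3ad-4a74-8b52-5aa37a365626", "10.100.0.182")
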
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.039 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.040 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.041 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.070 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.071 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.071 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
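
The lock chatter in this log comes from oslo.concurrency's two locking helpers: the decorator form emits the 'acquired by "..." :: waited' / ':: held' pairs from its inner() wrapper (lockutils.py:404/409/423 above), while the context-manager form emits the Acquiring/Acquired/Releasing lines around the refresh_cache-<uuid> lock (lockutils.py:312/315/333). A minimal sketch of both, with hypothetical function bodies:

    from oslo_concurrency import lockutils

    # decorator form: logs 'Lock "compute_resources" acquired/"released"'
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # hypothetical placeholder

    # context-manager form: logs Acquiring/Acquired/Releasing
    with lockutils.lock("refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0"):
        pass  # hypothetical placeholder
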
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.072 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.188 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:56:28 compute-0 podman[255260]: 2025-11-29 15:56:28.238159277 +0000 UTC m=+0.092824447 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.263 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.264 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.326 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.336 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.393 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.394 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.451 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
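
The Running cmd / CMD "..." returned pairs above are the resource tracker probing each instance disk with qemu-img, wrapped in oslo_concurrency.prlimit to cap address space and CPU time. A minimal sketch of issuing the same probe through oslo.concurrency (disk path copied from the log; not nova's own code):

    import json
    from oslo_concurrency import processutils

    def qemu_img_info(path):
        # --as=1073741824 --cpu=30 in the logged command map to these limits
        limits = processutils.ProcessLimits(address_space=1 << 30, cpu_time=30)
        out, _err = processutils.execute(
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
            prlimit=limits)
        return json.loads(out)

    info = qemu_img_info(
        "/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk")
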
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.921 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.923 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4950MB free_disk=72.24894332885742GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.923 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.923 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:56:28 compute-0 nova_compute[189485]: 2025-11-29 15:56:28.992 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.027 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.028 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.029 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.029 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.118 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.138 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
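
For reference, placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio, so the figures above work out to roughly: VCPU (8 - 0) * 4.0 = 32, MEMORY_MB (7679 - 512) * 1.0 = 7167, DISK_GB (79 - 1) * 0.9 = 70.2.
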
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.169 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.170 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.246s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.614 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.615 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:29 compute-0 nova_compute[189485]: 2025-11-29 15:56:29.616 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:29 compute-0 podman[203677]: time="2025-11-29T15:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:56:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:56:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
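
Those GET lines are a client (podman_exporter, per the CONTAINER_HOST=unix:///run/podman/podman.sock setting in its config above) querying the libpod REST API over podman's unix socket. A minimal sketch of issuing the same containers query from the Python standard library (the "localhost" host name is a dummy required by HTTPConnection; the socket path and URL come from the log):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that connects to a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")  # dummy host, unused over AF_UNIX
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
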
Nov 29 15:56:30 compute-0 ovn_controller[97827]: 2025-11-29T15:56:30Z|00179|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 15:56:30 compute-0 nova_compute[189485]: 2025-11-29 15:56:30.831 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:31 compute-0 openstack_network_exporter[205841]: ERROR   15:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:56:31 compute-0 openstack_network_exporter[205841]: ERROR   15:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:56:31 compute-0 openstack_network_exporter[205841]: ERROR   15:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:56:31 compute-0 openstack_network_exporter[205841]: ERROR   15:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:56:31 compute-0 openstack_network_exporter[205841]: ERROR   15:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:56:31 compute-0 nova_compute[189485]: 2025-11-29 15:56:31.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:31 compute-0 nova_compute[189485]: 2025-11-29 15:56:31.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:56:33 compute-0 nova_compute[189485]: 2025-11-29 15:56:33.997 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:35 compute-0 nova_compute[189485]: 2025-11-29 15:56:35.834 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:38 compute-0 podman[255298]: 2025-11-29 15:56:38.674748701 +0000 UTC m=+0.105354424 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 15:56:39 compute-0 nova_compute[189485]: 2025-11-29 15:56:39.003 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:40 compute-0 podman[255317]: 2025-11-29 15:56:40.691499018 +0000 UTC m=+0.112477856 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:56:40 compute-0 podman[255318]: 2025-11-29 15:56:40.705814044 +0000 UTC m=+0.121373866 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:56:40 compute-0 podman[255331]: 2025-11-29 15:56:40.71089967 +0000 UTC m=+0.104668726 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc.)
Nov 29 15:56:40 compute-0 podman[255316]: 2025-11-29 15:56:40.727583848 +0000 UTC m=+0.163963950 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Nov 29 15:56:40 compute-0 podman[255324]: 2025-11-29 15:56:40.747187746 +0000 UTC m=+0.153659884 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.build-date=20251125)
Nov 29 15:56:40 compute-0 nova_compute[189485]: 2025-11-29 15:56:40.837 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:44 compute-0 nova_compute[189485]: 2025-11-29 15:56:44.007 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:44 compute-0 podman[255411]: 2025-11-29 15:56:44.641466795 +0000 UTC m=+0.093569726 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:56:45 compute-0 podman[255430]: 2025-11-29 15:56:45.665871525 +0000 UTC m=+0.110732379 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:56:45 compute-0 nova_compute[189485]: 2025-11-29 15:56:45.838 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.484 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.485 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.486 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.487 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.487 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.487 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
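The acquire/release pairs above are oslo.concurrency's named-lock pattern: the image-cache periodic task serializes access to the storage registry, and lockutils itself emits the Acquiring/acquired/released DEBUG lines with the wait and hold times. A sketch of the same primitive (function body and usage are illustrative, not nova's code):

    # lock_sketch.py - the named-lock pattern behind "storage-registry-lock".
    from oslo_concurrency import lockutils

    @lockutils.synchronized("storage-registry-lock")
    def do_register_storage_use():
        # Runs with the named lock held; lockutils logs the acquire/release
        # pair around this call, as in the log lines above.
        pass

    # Equivalent context-manager form, which logs the same pairs:
    with lockutils.lock("storage-registry-lock"):
        pass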
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.508 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.528 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.528 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Image id 276c0a04-08bd-40bb-ad7b-a0be69fa4466 yields fingerprint bc62df192b9cc3765848644231821ffd9bd86fa9 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.529 189489 INFO nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] image 276c0a04-08bd-40bb-ad7b-a0be69fa4466 at (/var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9): checking
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.529 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] image 276c0a04-08bd-40bb-ad7b-a0be69fa4466 at (/var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.531 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
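The fingerprint pairing here is a SHA-1 over the image id string, which is also why the entry with an empty image id comes out as da39a3ee5e6b4b0d3255bfef95601890afd80709, the SHA-1 of the empty string. A sketch reproducing both values from the lines above:

    # fingerprint_sketch.py - reproduces the image-cache fingerprints above.
    import hashlib

    def fingerprint(image_id: str) -> str:
        return hashlib.sha1(image_id.encode("utf-8")).hexdigest()

    print(fingerprint("276c0a04-08bd-40bb-ad7b-a0be69fa4466"))
    # -> bc62df192b9cc3765848644231821ffd9bd86fa9 (the _base file name above)
    print(fingerprint(""))
    # -> da39a3ee5e6b4b0d3255bfef95601890afd80709 (the empty-id entry above)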
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.532 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] 2c879d1e-7499-4665-8880-438b30ff9d86 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.532 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] 2c879d1e-7499-4665-8880-438b30ff9d86 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.533 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.594 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
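Each disk probe above is qemu-img info wrapped in oslo_concurrency.prlimit, which caps the child's address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30 seconds) so a malformed image cannot hang or exhaust the agent, while --force-share allows reading a disk QEMU already has open. A sketch re-running the exact command from the log (the instance path is the one printed above):

    # qemu_img_probe.py - re-runs the probe from the log lines above.
    import json
    import subprocess

    DISK = "/var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk"
    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap address space at 1 GiB
        "--cpu=30",          # cap CPU time at 30 s
        "--", "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", DISK, "--force-share", "--output=json",
    ]
    out = subprocess.run(cmd, check=True, capture_output=True, text=True).stdout
    info = json.loads(out)
    print(info.get("format"), info.get("backing-filename"))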
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.595 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 is backed by bc62df192b9cc3765848644231821ffd9bd86fa9 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.595 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] a1c56ffa-6d1c-408c-8667-517745513fd0 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.595 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] a1c56ffa-6d1c-408c-8667-517745513fd0 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.596 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.690 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.691 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 is backed by bc62df192b9cc3765848644231821ffd9bd86fa9 _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.691 189489 WARNING nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.692 189489 WARNING nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.692 189489 WARNING nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.692 189489 INFO nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Active base files: /var/lib/nova/instances/_base/bc62df192b9cc3765848644231821ffd9bd86fa9
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.693 189489 INFO nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Removable base files: /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525 /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.693 189489 INFO nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a7996d50170914c9415f43103aca35ccc26834bd
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.694 189489 INFO nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/a9699c1a698d6502fb8d031636af19823e4dc525
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.694 189489 INFO nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/c7e712fd6afdf0909a364074b7f15b004ad35ab1
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.694 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.695 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.695 189489 DEBUG nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Nov 29 15:56:48 compute-0 nova_compute[189485]: 2025-11-29 15:56:48.695 189489 INFO nova.virt.libvirt.imagecache [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
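The "too young to remove" lines are the cache-aging rule: a base, swap, or ephemeral file that nothing references is still kept until it has been unused longer than a configured minimum age (nova's [libvirt] remove_unused_original_minimum_age_seconds option, 24 hours by default). A sketch of that decision under those assumptions, using the file's mtime as the age signal:

    # cache_age_sketch.py - the keep-or-remove rule behind "too young to
    # remove"; the 24h threshold is an assumed default, not read from nova.conf.
    import os
    import time

    MAX_AGE = 24 * 3600  # seconds

    def removable(base_file: str) -> bool:
        age = time.time() - os.path.getmtime(base_file)
        return age > MAX_AGE

    print(removable("/var/lib/nova/instances/_base/"
                    "a7996d50170914c9415f43103aca35ccc26834bd"))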
Nov 29 15:56:49 compute-0 nova_compute[189485]: 2025-11-29 15:56:49.010 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:50 compute-0 nova_compute[189485]: 2025-11-29 15:56:50.840 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:54 compute-0 nova_compute[189485]: 2025-11-29 15:56:54.015 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:55 compute-0 nova_compute[189485]: 2025-11-29 15:56:55.842 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:58 compute-0 podman[255461]: 2025-11-29 15:56:58.678868356 +0000 UTC m=+0.120259315 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:56:59 compute-0 nova_compute[189485]: 2025-11-29 15:56:59.021 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:56:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:56:59.214 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:56:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:56:59.215 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:56:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:56:59.215 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:56:59 compute-0 podman[203677]: time="2025-11-29T15:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:56:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:56:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
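The two GET lines are the podman REST service (the podman[203677] process) answering podman_exporter's libpod queries over the unix socket it mounts as CONTAINER_HOST. A stdlib-only sketch issuing the same containers/json call; the socket path is the one from the exporter's config above:

    # libpod_query.py - same libpod call as the access-log lines above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection that dials a unix socket instead of TCP.
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")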
Nov 29 15:57:00 compute-0 nova_compute[189485]: 2025-11-29 15:57:00.844 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:01 compute-0 openstack_network_exporter[205841]: ERROR   15:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:57:01 compute-0 openstack_network_exporter[205841]: ERROR   15:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:57:01 compute-0 openstack_network_exporter[205841]: ERROR   15:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:57:01 compute-0 openstack_network_exporter[205841]: ERROR   15:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:57:01 compute-0 openstack_network_exporter[205841]: ERROR   15:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
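This error block repeats every 30 seconds (see 15:57:31 below): the exporter drives ovs-appctl-style RPCs, which need each daemon's control socket (conventionally <rundir>/<daemon>.<pid>.ctl). Here ovn-northd does not run on a compute node at all, the ovsdb-server socket is not visible at the path the exporter scans, and the dpif-netdev/pmd-* calls only apply to the userspace (netdev/DPDK) datapath, hence "please specify an existing datapath" on a kernel-datapath host. A sketch of the same precondition check; the run directories are the conventional ones and may differ in a containerized layout:

    # ctl_socket_check.py - looks for the control sockets the exporter needs.
    import glob

    PATTERNS = (
        "/var/run/openvswitch/ovsdb-server.*.ctl",
        "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "/var/run/ovn/ovn-northd.*.ctl",
    )
    for pattern in PATTERNS:
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")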
Nov 29 15:57:04 compute-0 nova_compute[189485]: 2025-11-29 15:57:04.027 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:05 compute-0 nova_compute[189485]: 2025-11-29 15:57:05.847 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:09 compute-0 nova_compute[189485]: 2025-11-29 15:57:09.033 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:09 compute-0 podman[255483]: 2025-11-29 15:57:09.672780428 +0000 UTC m=+0.115604080 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Nov 29 15:57:10 compute-0 nova_compute[189485]: 2025-11-29 15:57:10.849 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:11 compute-0 podman[255507]: 2025-11-29 15:57:11.668614793 +0000 UTC m=+0.099230800 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, name=ubi9-minimal, release=1755695350, config_id=edpm, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Nov 29 15:57:11 compute-0 podman[255503]: 2025-11-29 15:57:11.67185822 +0000 UTC m=+0.110971465 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., name=ubi9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 15:57:11 compute-0 podman[255505]: 2025-11-29 15:57:11.677401509 +0000 UTC m=+0.103360261 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 29 15:57:11 compute-0 podman[255504]: 2025-11-29 15:57:11.682903567 +0000 UTC m=+0.114411718 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 15:57:11 compute-0 podman[255506]: 2025-11-29 15:57:11.706549183 +0000 UTC m=+0.140279965 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 15:57:13 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Nov 29 15:57:14 compute-0 nova_compute[189485]: 2025-11-29 15:57:14.038 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:14 compute-0 podman[255600]: 2025-11-29 15:57:14.848215742 +0000 UTC m=+0.134640072 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Nov 29 15:57:15 compute-0 nova_compute[189485]: 2025-11-29 15:57:15.852 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:16 compute-0 podman[255620]: 2025-11-29 15:57:16.706405895 +0000 UTC m=+0.145171635 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 15:57:19 compute-0 nova_compute[189485]: 2025-11-29 15:57:19.040 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:19 compute-0 nova_compute[189485]: 2025-11-29 15:57:19.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:19 compute-0 nova_compute[189485]: 2025-11-29 15:57:19.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 15:57:19 compute-0 nova_compute[189485]: 2025-11-29 15:57:19.547 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
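_run_pending_deletes and the _poll_* entries around it are oslo.service periodic tasks: decorated methods on the compute manager that run_periodic_tasks dispatches on their own intervals, which is why a single request id (req-a545e679-...) threads through all of them. An illustrative skeleton of the mechanism (class name, spacing, and body are examples, not nova's code):

    # periodic_sketch.py - shape of the periodic tasks in the log above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # seconds; illustrative
        def _run_pending_deletes(self, context):
            # Would log "Cleaning up deleted instances" and then
            # "There are N instances to clean", as above.
            pass

    Manager().run_periodic_tasks(context=None)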
Nov 29 15:57:20 compute-0 nova_compute[189485]: 2025-11-29 15:57:20.853 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:21 compute-0 nova_compute[189485]: 2025-11-29 15:57:21.548 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:24 compute-0 nova_compute[189485]: 2025-11-29 15:57:24.043 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.514 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.515 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.516 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.550 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.551 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.551 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.552 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.656 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.722 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.723 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.799 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.806 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.855 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.863 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.864 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:57:25 compute-0 nova_compute[189485]: 2025-11-29 15:57:25.921 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.324 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.326 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4961MB free_disk=72.24899673461914GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
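The resource view above embeds the host's PCI inventory as JSON: every function is either QEMU's emulated chipset (vendor_id 8086) or a virtio device (vendor_id 1af4), and all report numa_node null, consistent with the NUMA-affinity warning just before it. A short sketch tallying that list (entries abbreviated here; the log line carries all 11):

    # pci_tally.py - counts the PCI inventory from the resource view above.
    import json

    pci_devices = json.loads("""[
      {"address": "0000:00:03.0", "vendor_id": "1af4", "product_id": "1000"},
      {"address": "0000:00:01.0", "vendor_id": "8086", "product_id": "7000"}
    ]""")  # abbreviated; paste the full list from the log to tally all 11

    virtio = [d for d in pci_devices if d["vendor_id"] == "1af4"]
    print(f"{len(virtio)} virtio functions of {len(pci_devices)} PCI devices")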
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.326 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.327 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.558 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.559 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.560 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.560 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.642 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.711 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.711 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
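The inventory nova reports becomes schedulable capacity in placement as (total - reserved) * allocation_ratio per resource class, so this 8-core host advertises 32 VCPU, 7167 MB of RAM and 70.2 GB of disk. Worked out from the exact figures above:

    # capacity_sketch.py - effective placement capacity from the inventory above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2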
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.732 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.757 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.830 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.850 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.851 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:57:26 compute-0 nova_compute[189485]: 2025-11-29 15:57:26.852 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.525s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:57:27 compute-0 nova_compute[189485]: 2025-11-29 15:57:27.820 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:28 compute-0 nova_compute[189485]: 2025-11-29 15:57:28.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:29 compute-0 nova_compute[189485]: 2025-11-29 15:57:29.046 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:29 compute-0 nova_compute[189485]: 2025-11-29 15:57:29.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:29 compute-0 podman[255657]: 2025-11-29 15:57:29.700078357 +0000 UTC m=+0.130107191 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:57:29 compute-0 podman[203677]: time="2025-11-29T15:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:57:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:57:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4793 "" "Go-http-client/1.1"
Nov 29 15:57:30 compute-0 nova_compute[189485]: 2025-11-29 15:57:30.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:30 compute-0 nova_compute[189485]: 2025-11-29 15:57:30.858 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:31 compute-0 openstack_network_exporter[205841]: ERROR   15:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:57:31 compute-0 openstack_network_exporter[205841]: ERROR   15:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:57:31 compute-0 openstack_network_exporter[205841]: ERROR   15:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:57:31 compute-0 openstack_network_exporter[205841]: ERROR   15:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:57:32 compute-0 nova_compute[189485]: 2025-11-29 15:57:32.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:32 compute-0 nova_compute[189485]: 2025-11-29 15:57:32.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 15:57:34 compute-0 nova_compute[189485]: 2025-11-29 15:57:34.050 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:35 compute-0 nova_compute[189485]: 2025-11-29 15:57:35.863 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:39 compute-0 nova_compute[189485]: 2025-11-29 15:57:39.053 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:40 compute-0 podman[255681]: 2025-11-29 15:57:40.684499454 +0000 UTC m=+0.122111225 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 15:57:40 compute-0 nova_compute[189485]: 2025-11-29 15:57:40.867 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:41 compute-0 nova_compute[189485]: 2025-11-29 15:57:41.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:42 compute-0 podman[255701]: 2025-11-29 15:57:42.661362748 +0000 UTC m=+0.091319347 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125)
Nov 29 15:57:42 compute-0 podman[255707]: 2025-11-29 15:57:42.676223038 +0000 UTC m=+0.098287265 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 29 15:57:42 compute-0 podman[255700]: 2025-11-29 15:57:42.688639281 +0000 UTC m=+0.117530182 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 15:57:42 compute-0 podman[255699]: 2025-11-29 15:57:42.686700849 +0000 UTC m=+0.127971572 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, com.redhat.component=ubi9-container, vcs-type=git, version=9.4, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Nov 29 15:57:42 compute-0 podman[255702]: 2025-11-29 15:57:42.723831178 +0000 UTC m=+0.154444805 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 15:57:43 compute-0 nova_compute[189485]: 2025-11-29 15:57:43.497 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:44 compute-0 nova_compute[189485]: 2025-11-29 15:57:44.062 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:45 compute-0 podman[255794]: 2025-11-29 15:57:45.718382131 +0000 UTC m=+0.158476623 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, config_id=multipathd, org.label-schema.license=GPLv2)
Nov 29 15:57:45 compute-0 nova_compute[189485]: 2025-11-29 15:57:45.872 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:47 compute-0 podman[255814]: 2025-11-29 15:57:47.637849131 +0000 UTC m=+0.087851993 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:57:49 compute-0 nova_compute[189485]: 2025-11-29 15:57:49.067 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:50 compute-0 nova_compute[189485]: 2025-11-29 15:57:50.879 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:54 compute-0 nova_compute[189485]: 2025-11-29 15:57:54.071 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:55 compute-0 nova_compute[189485]: 2025-11-29 15:57:55.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:55 compute-0 nova_compute[189485]: 2025-11-29 15:57:55.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 15:57:55 compute-0 nova_compute[189485]: 2025-11-29 15:57:55.882 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.250 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.273 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Triggering sync for uuid 2c879d1e-7499-4665-8880-438b30ff9d86 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.274 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Triggering sync for uuid a1c56ffa-6d1c-408c-8667-517745513fd0 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.274 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.274 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "2c879d1e-7499-4665-8880-438b30ff9d86" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.275 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.275 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.319 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.044s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:57:57 compute-0 nova_compute[189485]: 2025-11-29 15:57:57.320 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "2c879d1e-7499-4665-8880-438b30ff9d86" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.046s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:57:59 compute-0 nova_compute[189485]: 2025-11-29 15:57:59.077 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:57:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:57:59.215 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:57:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:57:59.215 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:57:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:57:59.216 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:57:59 compute-0 podman[203677]: time="2025-11-29T15:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:57:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:57:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Nov 29 15:58:00 compute-0 podman[255837]: 2025-11-29 15:58:00.618222466 +0000 UTC m=+0.071744420 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 15:58:00 compute-0 nova_compute[189485]: 2025-11-29 15:58:00.884 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.063 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.063 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.063 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.064 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.074 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'name': 'te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.079 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a1c56ffa-6d1c-408c-8667-517745513fd0', 'name': 'te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.079 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.080 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T15:58:01.080098) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.087 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.092 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.093 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.093 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.093 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.093 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.093 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T15:58:01.093399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.094 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.094 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.094 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.094 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T15:58:01.094889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.128 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/memory.usage volume: 42.37109375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.159 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/memory.usage volume: 43.8125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.160 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.160 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.160 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.160 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.160 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
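
[editor's note] Two worker IDs interleave throughout this trace: 14 performs the polling, while 12 later logs "Updated heartbeat for ..." carrying the timestamp 14 recorded a few milliseconds earlier. That pattern is consistent with a producer/consumer split; the sketch below models it with a queue and a thread, as an assumed design rather than ceilometer's actual worker layout:

    import datetime
    import queue
    import threading

    status_queue = queue.Queue()

    def status_writer():
        # Plays the role of worker "12": drain heartbeat timestamps and
        # record them, slightly behind the poller that produced them.
        while True:
            name, ts = status_queue.get()
            if name is None:
                break
            print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    writer = threading.Thread(target=status_writer)
    writer.start()

    # Plays the role of worker "14": post a heartbeat after each poll.
    status_queue.put(("network.incoming.bytes",
                      datetime.datetime.now(datetime.timezone.utc)))
    status_queue.put((None, None))  # stop the demo writer
    writer.join()
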
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.161 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.161 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.162 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.162 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
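
[editor's note] network.incoming.bytes is a cumulative counter (1430 and 1976 bytes so far), while network.incoming.bytes.delta reports the change since the previous poll (0 and 630). Assuming the delta is derived from successive cumulative readings, the second instance's previous reading would have been 1976 - 630 = 1346, and the first instance's 0 delta simply means its counter did not move between cycles. A sketch of that bookkeeping, with an invented cache:

    # Cache of the previous cumulative reading per (resource, meter);
    # the name _last is invented for this sketch.
    _last = {}

    def delta_sample(resource, meter, cumulative):
        prev = _last.get((resource, meter))
        _last[(resource, meter)] = cumulative
        # Nothing to diff against on the first reading; one policy
        # choice is to report 0 until a second reading arrives.
        return 0 if prev is None else cumulative - prev

    uuid = "a1c56ffa-6d1c-408c-8667-517745513fd0"
    print(delta_sample(uuid, "network.incoming.bytes", 1346))  # first call -> 0
    print(delta_sample(uuid, "network.incoming.bytes", 1976))  # 630, as logged
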
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.163 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.163 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.163 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T15:58:01.160507) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.163 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T15:58:01.162138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.163 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.164 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.164 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.165 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T15:58:01.163773) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T15:58:01.165118) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.238 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.239 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.310 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.311 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
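
[editor's note] disk.device.read.bytes emits two "volume:" lines per instance, one per attached block device: a large value for the main disk (~30 MB read) and a small one of a few hundred KB, plausibly a config drive. A sketch of per-device sample emission; the device names and the stats dict are illustrative, with values copied from the 2c879d1e lines above:

    # One sample per (instance, block device); device names are made up.
    disk_read_bytes = {
        "2c879d1e-7499-4665-8880-438b30ff9d86": {"vda": 30579200,
                                                 "sda": 299326},
    }

    for instance, devices in disk_read_bytes.items():
        for device, value in devices.items():
            print(f"{instance}/disk.device.read.bytes "
                  f"[{device}] volume: {value}")
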
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.312 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.313 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.313 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.314 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.314 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.316 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T15:58:01.313321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T15:58:01.316137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.337 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.338 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.362 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.363 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.364 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.365 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.365 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/cpu volume: 333970000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T15:58:01.365278) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.366 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/cpu volume: 189900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.366 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
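
[editor's note] The cpu meter is cumulative guest CPU time in nanoseconds: 333970000000 ns is about 334 seconds of CPU time consumed since boot. A utilization percentage over a polling interval can be derived from two consecutive readings; the interval and vCPU count below are assumed for the example:

    def cpu_util_percent(prev_ns, cur_ns, interval_s, vcpus):
        # Fraction of available CPU time consumed between two readings.
        return (cur_ns - prev_ns) / (interval_s * vcpus * 1e9) * 100.0

    # 3e9 ns of CPU time over an assumed 300 s cycle on 1 vCPU -> 1.0%.
    print(cpu_util_percent(330_970_000_000, 333_970_000_000, 300, 1))
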
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.367 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.367 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.367 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.368 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.368 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.368 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 569535603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.369 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 64248485 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.369 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 639094886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.370 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T15:58:01.368220) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.370 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 59124615 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.371 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.371 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
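
[editor's note] Both rate variants (network.outgoing.bytes.rate earlier and network.incoming.bytes.rate here) are skipped with "no new resources found this cycle", while their cumulative counterparts polled both instances. One plausible reading, offered as an assumption, is that each pollster only runs over the resources its configured source yielded this cycle, and skips outright when that set is empty:

    # Hypothetical per-cycle resource sets; the rate meter's set is empty.
    cycle_resources = {
        "network.incoming.bytes": {
            "2c879d1e-7499-4665-8880-438b30ff9d86",
            "a1c56ffa-6d1c-408c-8667-517745513fd0",
        },
        "network.incoming.bytes.rate": set(),
    }

    def maybe_poll(meter):
        resources = cycle_resources.get(meter, set())
        if not resources:
            print(f"Skip pollster {meter}, no new resources found this cycle")
            return
        print(f"Polling pollster {meter} ({len(resources)} resources)")

    maybe_poll("network.incoming.bytes.rate")
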
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.371 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.372 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.372 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.372 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.372 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.372 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.373 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.373 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.374 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.375 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.375 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.375 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.375 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.376 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.376 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.376 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.377 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.377 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.378 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.379 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T15:58:01.372637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.379 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T15:58:01.376336) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.379 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.380 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 73048064 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T15:58:01.380281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.381 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.381 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.382 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.382 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.383 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.383 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.383 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.384 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 8791762312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.384 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.385 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T15:58:01.383864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.385 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 3731344942 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.385 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.386 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.386 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.387 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.387 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.388 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T15:58:01.387719) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.388 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.389 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
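
[editor's note] Both instances report power.state volume 1, which lines up with libvirt's virDomainState numbering, where 1 is VIR_DOMAIN_RUNNING:

    # libvirt virDomainState values; 1 (running) matches both samples above.
    LIBVIRT_POWER_STATES = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_POWER_STATES[1])
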
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.389 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.389 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.389 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.389 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.389 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.389 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 315 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.390 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.390 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.390 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.391 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.392 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.392 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.392 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.392 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.392 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T15:58:01.389627) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.392 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.393 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.393 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.393 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.393 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.394 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
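
[editor's note] Unlike the hypervisor-backed meters, disk.ephemeral.size (and disk.root.size further down) finishes without any per-instance "volume:" debug line from _stats_to_sample. One consistent explanation, stated here as an assumption, is that these sizes come from the instance's flavor metadata rather than from guest I/O statistics, so they take a different code path; these flavors may simply define no ephemeral disk:

    # Hypothetical flavor-backed lookup; field names are illustrative.
    def ephemeral_size_gb(instance):
        return instance.get("flavor", {}).get("ephemeral", 0)

    print(ephemeral_size_gb({"flavor": {"ephemeral": 0}}))  # 0 GB
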
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.394 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.395 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.395 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.395 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T15:58:01.392410) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.396 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.396 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.396 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.397 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T15:58:01.394072) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.397 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.398 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.398 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T15:58:01.395317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.398 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.399 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.399 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T15:58:01.397953) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.399 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.400 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T15:58:01.399852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.400 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.400 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.400 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.401 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.401 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.401 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.401 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.401 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.402 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.402 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.402 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.402 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.403 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.403 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T15:58:01.401423) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.403 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.404 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.405 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.406 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.407 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T15:58:01.403222) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.407 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.408 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.409 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.410 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 15:58:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 15:58:01.411 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
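
The ceilometer lines above trace one cycle per meter: a discovery pass over local instances, a coordination check that is skipped because no coordination group is configured, one sample per instance, a heartbeat update, and a closing "Finished polling" marker. A minimal Python sketch of that loop, using illustrative names rather than Ceilometer's real AgentManager API:

    from datetime import datetime, timezone

    def run_polling_task(pollsters, discover_local_instances, heartbeats):
        # "Executing discovery process ...": one discovery pass feeds each pollster.
        resources = discover_local_instances()
        for pollster in pollsters:
            # "Checking if we need coordination ...": with no coordination group
            # there is no hashring to consult, so every resource is polled locally.
            if getattr(pollster, "coordination_group", None) is not None:
                continue  # a coordinated agent would filter resources via its hashring here
            # "<uuid>/<meter> volume: N": one sample per discovered instance.
            for sample in pollster.get_samples(resources):  # hypothetical API
                print(f"{sample['resource_id']}/{pollster.name} volume: {sample['volume']}")
            # "Updated heartbeat for <meter> (<timestamp>)"
            heartbeats[pollster.name] = datetime.now(timezone.utc)
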
Nov 29 15:58:01 compute-0 openstack_network_exporter[205841]: ERROR   15:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:58:01 compute-0 openstack_network_exporter[205841]: ERROR   15:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:58:01 compute-0 openstack_network_exporter[205841]: ERROR   15:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:58:01 compute-0 openstack_network_exporter[205841]: ERROR   15:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:58:01 compute-0 openstack_network_exporter[205841]: ERROR   15:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
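
These exporter errors recur on every scrape: openstack_network_exporter looks for the <daemon>.<pid>.ctl control sockets that ovs-appctl uses, and on a compute node ovn-northd does not run (it lives with the control plane), while a kernel datapath means there are no dpif-netdev PMD statistics to fetch. A quick check for which control sockets actually exist, assuming the conventional /run/openvswitch and /run/ovn runtime directories rather than whatever the exporter's YAML configures:

    import glob

    for pattern in ("/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control socket files found")
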
Nov 29 15:58:04 compute-0 nova_compute[189485]: 2025-11-29 15:58:04.082 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:05 compute-0 nova_compute[189485]: 2025-11-29 15:58:05.887 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:09 compute-0 nova_compute[189485]: 2025-11-29 15:58:09.087 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:10 compute-0 nova_compute[189485]: 2025-11-29 15:58:10.890 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:11 compute-0 podman[255879]: 2025-11-29 15:58:11.682775977 +0000 UTC m=+0.122690690 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Nov 29 15:58:13 compute-0 podman[255899]: 2025-11-29 15:58:13.687550312 +0000 UTC m=+0.123804730 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., release-0.7.12=, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., architecture=x86_64, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=)
Nov 29 15:58:13 compute-0 podman[255900]: 2025-11-29 15:58:13.688388425 +0000 UTC m=+0.117658165 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 15:58:13 compute-0 podman[255909]: 2025-11-29 15:58:13.702139955 +0000 UTC m=+0.105189880 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.buildah.version=1.33.7, version=9.6, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41)
Nov 29 15:58:13 compute-0 podman[255901]: 2025-11-29 15:58:13.707324263 +0000 UTC m=+0.126006679 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 15:58:13 compute-0 podman[255902]: 2025-11-29 15:58:13.726481629 +0000 UTC m=+0.146618444 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Nov 29 15:58:14 compute-0 nova_compute[189485]: 2025-11-29 15:58:14.089 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:15 compute-0 nova_compute[189485]: 2025-11-29 15:58:15.892 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:16 compute-0 podman[255994]: 2025-11-29 15:58:16.663355961 +0000 UTC m=+0.105167900 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, container_name=multipathd)
Nov 29 15:58:18 compute-0 podman[256014]: 2025-11-29 15:58:18.685353719 +0000 UTC m=+0.125902597 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 15:58:19 compute-0 nova_compute[189485]: 2025-11-29 15:58:19.093 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:20 compute-0 nova_compute[189485]: 2025-11-29 15:58:20.895 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:22 compute-0 nova_compute[189485]: 2025-11-29 15:58:22.509 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:24 compute-0 nova_compute[189485]: 2025-11-29 15:58:24.097 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:25 compute-0 nova_compute[189485]: 2025-11-29 15:58:25.898 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:26 compute-0 nova_compute[189485]: 2025-11-29 15:58:26.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:27 compute-0 nova_compute[189485]: 2025-11-29 15:58:27.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:27 compute-0 nova_compute[189485]: 2025-11-29 15:58:27.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:58:27 compute-0 nova_compute[189485]: 2025-11-29 15:58:27.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 15:58:27 compute-0 nova_compute[189485]: 2025-11-29 15:58:27.909 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:58:27 compute-0 nova_compute[189485]: 2025-11-29 15:58:27.909 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:58:27 compute-0 nova_compute[189485]: 2025-11-29 15:58:27.910 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:58:27 compute-0 nova_compute[189485]: 2025-11-29 15:58:27.911 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.102 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.560 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.583 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.583 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
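
The network_info payload logged at 15:58:29.560 is plain JSON, so the cache refresh can be checked by hand. A small sketch that pulls the VIF id, MAC, bridge, and fixed IPs out of a trimmed copy of that payload:

    import json

    # Trimmed from the "Updating instance_info_cache" line above.
    network_info = json.loads("""[{"id": "28ff21af-c272-489e-85c2-27ab6ad320db",
      "address": "fa:16:3e:82:93:16",
      "network": {"bridge": "br-int",
                  "subnets": [{"ips": [{"address": "10.100.3.44"}]}]}}]""")

    for vif in network_info:
        ips = [ip["address"]
               for subnet in vif["network"]["subnets"]
               for ip in subnet["ips"]]
        print(vif["id"], vif["address"], vif["network"]["bridge"], ips)
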
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.583 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.584 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.584 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.608 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.609 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.609 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.610 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.716 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:58:29 compute-0 podman[203677]: time="2025-11-29T15:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:58:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:58:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
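
Those two GET lines are podman's libpod REST API answering over the unix socket that podman_exporter is configured with (CONTAINER_HOST=unix:///run/podman/podman.sock in its config_data below). The Python stdlib has no unix-socket HTTP client, but a small HTTPConnection subclass is enough to replay the same query; the socket path is the one from the exporter's config and the call needs root:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket, enough for the libpod API."""
        def __init__(self, sock_path):
            super().__init__("localhost")  # host value is unused for unix sockets
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")
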
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.816 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.818 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.921 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.931 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.988 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:58:29 compute-0 nova_compute[189485]: 2025-11-29 15:58:29.990 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.052 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
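
Each disk audit above runs qemu-img info under oslo_concurrency.prlimit, which caps the child process at 1 GiB of address space (--as=1073741824) and 30 s of CPU time (--cpu=30) so a malformed image cannot wedge the compute agent. The same invocation can be replayed with the stdlib; the command is copied verbatim from the log and the output keys are standard qemu-img JSON:

    import json
    import subprocess

    cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
           "--as=1073741824", "--cpu=30", "--",   # 1 GiB address space, 30 s CPU
           "env", "LC_ALL=C", "LANG=C",
           "qemu-img", "info",
           "/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk",
           "--force-share", "--output=json"]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    info = json.loads(out.stdout)
    print(info["format"], info["virtual-size"])
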
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.382 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.384 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4901MB free_disk=72.24893951416016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.384 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.385 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.466 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.467 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.467 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.467 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.547 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.565 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.567 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.568 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.184s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
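
The numbers in the "Final resource view" line follow from the inventory logged at 15:58:30.565: used_vcpus=2 and used_disk=2GB are the two per-instance allocations, and used_ram=768MB is 2 x 128 MB plus the 512 MB reserved. Assuming placement's usual capacity formula, (total - reserved) x allocation_ratio per resource class, the schedulable capacity works out as:

    inventory = {  # copied from the "inventory data" dict logged above
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
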
Nov 29 15:58:30 compute-0 nova_compute[189485]: 2025-11-29 15:58:30.899 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:31 compute-0 openstack_network_exporter[205841]: ERROR   15:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:58:31 compute-0 openstack_network_exporter[205841]: ERROR   15:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:58:31 compute-0 openstack_network_exporter[205841]: ERROR   15:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:58:31 compute-0 openstack_network_exporter[205841]: ERROR   15:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:58:31 compute-0 openstack_network_exporter[205841]: ERROR   15:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:58:31 compute-0 podman[256051]: 2025-11-29 15:58:31.629784497 +0000 UTC m=+0.085993964 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 15:58:32 compute-0 nova_compute[189485]: 2025-11-29 15:58:32.565 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:32 compute-0 nova_compute[189485]: 2025-11-29 15:58:32.565 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:34 compute-0 nova_compute[189485]: 2025-11-29 15:58:34.106 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:34 compute-0 nova_compute[189485]: 2025-11-29 15:58:34.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:58:34 compute-0 nova_compute[189485]: 2025-11-29 15:58:34.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
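
_reclaim_queued_deletes is the periodic task that purges SOFT_DELETED instances; with reclaim_instance_interval at its default of 0 it bails out immediately, which is all the "skipping..." line means. The guard amounts to (a sketch, not nova's literal code):

    reclaim_instance_interval = 0  # nova.conf default; a value > 0 enables soft-delete reclaim

    def reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # otherwise: find instances soft-deleted longer than the interval and purge them

    reclaim_queued_deletes()
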
Nov 29 15:58:35 compute-0 nova_compute[189485]: 2025-11-29 15:58:35.901 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:39 compute-0 nova_compute[189485]: 2025-11-29 15:58:39.111 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:40 compute-0 nova_compute[189485]: 2025-11-29 15:58:40.906 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:42 compute-0 podman[256074]: 2025-11-29 15:58:42.661613368 +0000 UTC m=+0.115040164 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, org.label-schema.vendor=CentOS)
Nov 29 15:58:44 compute-0 nova_compute[189485]: 2025-11-29 15:58:44.116 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:44 compute-0 podman[256103]: 2025-11-29 15:58:44.651977695 +0000 UTC m=+0.080670630 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm)
Nov 29 15:58:44 compute-0 podman[256093]: 2025-11-29 15:58:44.661187553 +0000 UTC m=+0.106425713 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4)
Nov 29 15:58:44 compute-0 podman[256095]: 2025-11-29 15:58:44.672368764 +0000 UTC m=+0.111777457 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:58:44 compute-0 podman[256094]: 2025-11-29 15:58:44.675621211 +0000 UTC m=+0.115884287 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 15:58:44 compute-0 podman[256096]: 2025-11-29 15:58:44.718383931 +0000 UTC m=+0.148354320 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Nov 29 15:58:45 compute-0 nova_compute[189485]: 2025-11-29 15:58:45.909 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:47 compute-0 podman[256189]: 2025-11-29 15:58:47.708231977 +0000 UTC m=+0.142318377 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 15:58:49 compute-0 nova_compute[189485]: 2025-11-29 15:58:49.121 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:50 compute-0 podman[256209]: 2025-11-29 15:58:50.009907026 +0000 UTC m=+0.119413492 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:58:50 compute-0 nova_compute[189485]: 2025-11-29 15:58:50.912 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:54 compute-0 nova_compute[189485]: 2025-11-29 15:58:54.125 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:55 compute-0 nova_compute[189485]: 2025-11-29 15:58:55.915 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:59 compute-0 nova_compute[189485]: 2025-11-29 15:58:59.131 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:58:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:58:59.216 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:58:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:58:59.216 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:58:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:58:59.217 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:58:59 compute-0 podman[203677]: time="2025-11-29T15:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:58:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:58:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Nov 29 15:59:00 compute-0 nova_compute[189485]: 2025-11-29 15:59:00.918 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:01 compute-0 openstack_network_exporter[205841]: ERROR   15:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:59:01 compute-0 openstack_network_exporter[205841]: ERROR   15:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:59:01 compute-0 openstack_network_exporter[205841]: ERROR   15:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:59:01 compute-0 openstack_network_exporter[205841]: ERROR   15:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:59:01 compute-0 openstack_network_exporter[205841]: ERROR   15:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:59:02 compute-0 podman[256233]: 2025-11-29 15:59:02.685294998 +0000 UTC m=+0.119881205 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 15:59:04 compute-0 nova_compute[189485]: 2025-11-29 15:59:04.134 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:05 compute-0 nova_compute[189485]: 2025-11-29 15:59:05.921 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:09 compute-0 nova_compute[189485]: 2025-11-29 15:59:09.138 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:10 compute-0 nova_compute[189485]: 2025-11-29 15:59:10.923 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:13 compute-0 podman[256257]: 2025-11-29 15:59:13.672197481 +0000 UTC m=+0.117285195 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, managed_by=edpm_ansible, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 15:59:14 compute-0 nova_compute[189485]: 2025-11-29 15:59:14.143 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:14 compute-0 podman[256277]: 2025-11-29 15:59:14.793023575 +0000 UTC m=+0.098081960 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.expose-services=, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 15:59:14 compute-0 podman[256279]: 2025-11-29 15:59:14.811294216 +0000 UTC m=+0.094667288 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Nov 29 15:59:14 compute-0 podman[256278]: 2025-11-29 15:59:14.8229747 +0000 UTC m=+0.123143303 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, name=ubi9, vcs-type=git, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc.)
Nov 29 15:59:14 compute-0 podman[256290]: 2025-11-29 15:59:14.825432846 +0000 UTC m=+0.092131519 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 15:59:14 compute-0 podman[256342]: 2025-11-29 15:59:14.937592222 +0000 UTC m=+0.122176647 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 15:59:15 compute-0 nova_compute[189485]: 2025-11-29 15:59:15.930 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:18 compute-0 podman[256377]: 2025-11-29 15:59:18.680049899 +0000 UTC m=+0.120034219 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Nov 29 15:59:19 compute-0 nova_compute[189485]: 2025-11-29 15:59:19.146 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:20 compute-0 podman[256397]: 2025-11-29 15:59:20.654566561 +0000 UTC m=+0.095481439 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:59:20 compute-0 nova_compute[189485]: 2025-11-29 15:59:20.929 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:24 compute-0 nova_compute[189485]: 2025-11-29 15:59:24.150 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:24 compute-0 nova_compute[189485]: 2025-11-29 15:59:24.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:25 compute-0 nova_compute[189485]: 2025-11-29 15:59:25.932 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.522 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.523 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.524 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.640 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.709 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.710 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.780 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.792 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.893 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.896 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 15:59:27 compute-0 nova_compute[189485]: 2025-11-29 15:59:27.993 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.459 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.461 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4886MB free_disk=72.24893951416016GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.461 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.461 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.567 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.567 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.568 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.568 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.813 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.845 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.848 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 15:59:28 compute-0 nova_compute[189485]: 2025-11-29 15:59:28.849 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.388s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 15:59:29 compute-0 nova_compute[189485]: 2025-11-29 15:59:29.155 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:29 compute-0 podman[203677]: time="2025-11-29T15:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:59:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:59:29 compute-0 podman[203677]: @ - - [29/Nov/2025:15:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 29 15:59:29 compute-0 nova_compute[189485]: 2025-11-29 15:59:29.850 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:29 compute-0 nova_compute[189485]: 2025-11-29 15:59:29.850 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 15:59:30 compute-0 nova_compute[189485]: 2025-11-29 15:59:30.174 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 15:59:30 compute-0 nova_compute[189485]: 2025-11-29 15:59:30.175 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 15:59:30 compute-0 nova_compute[189485]: 2025-11-29 15:59:30.175 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 15:59:30 compute-0 nova_compute[189485]: 2025-11-29 15:59:30.933 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:31 compute-0 openstack_network_exporter[205841]: ERROR   15:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 15:59:31 compute-0 openstack_network_exporter[205841]: ERROR   15:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:59:31 compute-0 openstack_network_exporter[205841]: ERROR   15:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 15:59:31 compute-0 openstack_network_exporter[205841]: ERROR   15:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 15:59:31 compute-0 openstack_network_exporter[205841]: ERROR   15:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 15:59:32 compute-0 nova_compute[189485]: 2025-11-29 15:59:32.375 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updating instance_info_cache with network_info: [{"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 15:59:32 compute-0 nova_compute[189485]: 2025-11-29 15:59:32.394 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 15:59:32 compute-0 nova_compute[189485]: 2025-11-29 15:59:32.395 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
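The Acquiring/Acquired/Releasing triplet bracketing the cache refresh above is oslo.concurrency's standard lock logging (lockutils.py:312/315/333). A minimal sketch of the pattern, with the lock name taken from the log and a stand-in body:

    from oslo_concurrency import lockutils

    instance_uuid = "a1c56ffa-6d1c-408c-8667-517745513fd0"

    with lockutils.lock(f"refresh_cache-{instance_uuid}"):
        # refresh the per-instance network info cache while holding the lock
        pass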
Nov 29 15:59:32 compute-0 nova_compute[189485]: 2025-11-29 15:59:32.396 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:32 compute-0 nova_compute[189485]: 2025-11-29 15:59:32.397 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:33 compute-0 nova_compute[189485]: 2025-11-29 15:59:33.026 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:33 compute-0 nova_compute[189485]: 2025-11-29 15:59:33.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:33 compute-0 podman[256431]: 2025-11-29 15:59:33.668311201 +0000 UTC m=+0.109503366 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 15:59:34 compute-0 nova_compute[189485]: 2025-11-29 15:59:34.158 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:34 compute-0 nova_compute[189485]: 2025-11-29 15:59:34.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:34 compute-0 nova_compute[189485]: 2025-11-29 15:59:34.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
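The "Running periodic task" lines come from oslo.service's PeriodicTasks machinery, and _reclaim_queued_deletes bails out immediately because reclaim_instance_interval is not positive. A minimal sketch of that pattern; the option registration, default, and spacing value are illustrative, not nova's actual configuration:

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task(spacing=60)  # spacing is an assumed value
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                # same guard as "CONF.reclaim_instance_interval <= 0, skipping..."
                return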
Nov 29 15:59:35 compute-0 nova_compute[189485]: 2025-11-29 15:59:35.937 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:39 compute-0 nova_compute[189485]: 2025-11-29 15:59:39.161 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:40 compute-0 nova_compute[189485]: 2025-11-29 15:59:40.940 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:44 compute-0 nova_compute[189485]: 2025-11-29 15:59:44.163 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:44 compute-0 nova_compute[189485]: 2025-11-29 15:59:44.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 15:59:44 compute-0 podman[256453]: 2025-11-29 15:59:44.683875086 +0000 UTC m=+0.135236509 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 29 15:59:45 compute-0 podman[256473]: 2025-11-29 15:59:45.663049609 +0000 UTC m=+0.104110351 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 15:59:45 compute-0 podman[256472]: 2025-11-29 15:59:45.668189817 +0000 UTC m=+0.115077625 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, container_name=kepler, distribution-scope=public, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, config_id=edpm, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, managed_by=edpm_ansible)
Nov 29 15:59:45 compute-0 podman[256474]: 2025-11-29 15:59:45.689472749 +0000 UTC m=+0.116330449 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 15:59:45 compute-0 podman[256475]: 2025-11-29 15:59:45.705376087 +0000 UTC m=+0.130949012 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ovn_controller)
Nov 29 15:59:45 compute-0 podman[256482]: 2025-11-29 15:59:45.721505481 +0000 UTC m=+0.151120025 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Nov 29 15:59:45 compute-0 nova_compute[189485]: 2025-11-29 15:59:45.942 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:49 compute-0 nova_compute[189485]: 2025-11-29 15:59:49.165 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:49 compute-0 podman[256571]: 2025-11-29 15:59:49.877198961 +0000 UTC m=+0.313902043 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd)
Nov 29 15:59:50 compute-0 nova_compute[189485]: 2025-11-29 15:59:50.944 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:51 compute-0 podman[256591]: 2025-11-29 15:59:51.675528183 +0000 UTC m=+0.101957232 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 15:59:54 compute-0 nova_compute[189485]: 2025-11-29 15:59:54.167 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:55 compute-0 nova_compute[189485]: 2025-11-29 15:59:55.948 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:59 compute-0 nova_compute[189485]: 2025-11-29 15:59:59.171 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 15:59:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:59:59.218 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 15:59:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:59:59.219 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 15:59:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 15:59:59.220 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
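The waited/held timings above come from oslo.concurrency's synchronized decorator (its "inner" wrapper at lockutils.py:404-423), as opposed to the context-manager form used for the nova cache refresh earlier. A minimal sketch of how a ProcessMonitor-style check is serialized, with a stand-in body:

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        @lockutils.synchronized("_check_child_processes")
        def _check_child_processes(self):
            # liveness check of the spawned child processes (stand-in body)
            pass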
Nov 29 15:59:59 compute-0 podman[203677]: time="2025-11-29T15:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 15:59:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 15:59:59 compute-0 podman[203677]: @ - - [29/Nov/2025:15:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
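The two GET requests above are the libpod REST API being queried over the podman socket (the podman_exporter config earlier in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). A stdlib-only sketch of the same call; the socket path and API version are taken from this log, and the run would of course need that socket to exist:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that speaks HTTP over a unix domain socket."""

        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # e.g. 200 and a body size like the log's 29524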
Nov 29 16:00:00 compute-0 nova_compute[189485]: 2025-11-29 16:00:00.951 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.064 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.064 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
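Per the two lines above, the manager dispatches every pollster in the [pollsters] source onto a single-worker ThreadPoolExecutor, which is why it warns that polling may run long. A toy sketch of that dispatch shape, with pollster names taken from this cycle and a trivial stand-in poll function:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # stand-in for a pollster's get_samples() work
        return f"polled {name}"

    pollsters = ["network.outgoing.bytes", "memory.usage", "disk.device.read.bytes"]

    # max_workers=1 mirrors the "[1] threads" above: pollsters run one after another
    with ThreadPoolExecutor(max_workers=1) as executor:
        futures = [executor.submit(poll, p) for p in pollsters]
        for f in futures:
            print(f.result())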
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.074 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'name': 'te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c351310>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.078 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a1c56ffa-6d1c-408c-8667-517745513fd0', 'name': 'te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.078 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.079 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.079 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.080 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T16:00:01.079200) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.086 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.091 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.091 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.092 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.092 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.092 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.092 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.092 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.092 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
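Note the pairing above: network.outgoing.bytes reports the cumulative interface counter (2250 and 1620 for the two instances), while network.outgoing.bytes.delta reports the change since the previous poll, zero here presumably because the counters did not move between cycles. A minimal sketch of the delta computation under that reading:

    def delta(previous, current):
        # counters can reset (e.g. on instance reboot), so clamp negatives to zero
        return max(current - previous, 0)

    assert delta(2250, 2250) == 0   # matches the zero deltas in this cycle
    assert delta(2000, 2250) == 250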
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.093 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.093 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.094 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T16:00:01.092330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.095 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T16:00:01.093384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.125 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/memory.usage volume: 42.37890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.160 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/memory.usage volume: 43.8125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.160 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
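The memory.usage samples above are in MiB (42.38 and 43.81 against the 128 MiB m1.nano flavor, i.e. roughly a third used). One plausible derivation, assuming the libvirt inspector computes usage as available minus unused from dom.memoryStats(), both reported in KiB; the formula and the sample numbers here are illustrative, not lifted from ceilometer's source:

    def memory_usage_mib(stats):
        # stats values assumed to be KiB, as libvirt's memoryStats() reports them
        return (stats["available"] - stats["unused"]) / 1024.0

    # hypothetical stats reproducing the second sample above (43.8125 MiB)
    print(memory_usage_mib({"available": 128 * 1024, "unused": 86208}))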
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.161 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.161 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.161 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.161 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.161 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.162 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.163 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.163 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.163 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.163 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.163 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.164 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.164 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.164 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.164 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T16:00:01.161382) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T16:00:01.162852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T16:00:01.163951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.169 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T16:00:01.164959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.208 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.208 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.272 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.273 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.273 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
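Each instance yields two disk.device.read.bytes samples (30579200 and 299326 for 2c879d1e-…): per-device pollsters emit one sample per attached block device, keyed by instance plus device. A sketch with invented device names; reading the second, much smaller device as a config drive is an assumption, not something the log states:

```python
# One sample per (instance, device): this is why every per-device pollster
# above logs two volumes for each of the two instances. Device names are
# invented; the byte counts are the values logged this cycle.
instance_block_stats = {
    "2c879d1e-7499-4665-8880-438b30ff9d86": {"vda": 30579200, "hdd": 299326},
    "a1c56ffa-6d1c-408c-8667-517745513fd0": {"vda": 29338624, "hdd": 246078},
}

for instance_id, devices in instance_block_stats.items():
    for device, read_bytes in devices.items():
        # per-device meters use "<instance>-<device>" as the resource id
        print(f"{instance_id}-{device}/disk.device.read.bytes volume: {read_bytes}")
```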
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.273 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.273 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.273 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.274 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.274 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.274 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.274 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.274 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.274 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.274 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.274 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.275 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.275 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T16:00:01.274086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.276 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T16:00:01.275066) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.293 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.294 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.308 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.308 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.309 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
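The capacity samples decode cleanly: 1073741824 bytes is exactly 1 GiB, and 509952 bytes is exactly 498 KiB, sized like a config drive (an interpretation, not stated in the log):

```python
# Decoding the disk.device.capacity volumes logged above.
root_disk = 1073741824          # bytes, first device of each instance
config_drive = 509952           # bytes, second device of each instance
print(root_disk / 2**30)        # 1.0  -> a 1 GiB virtual disk
print(config_drive / 2**10)     # 498.0 -> 498 KiB, config-drive sized
```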
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.309 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.309 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.310 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.310 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/cpu volume: 335780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.310 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/cpu volume: 309600000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
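The cpu volumes (335780000000 and 309600000000) are cumulative guest CPU time in nanoseconds, so a utilisation figure needs two consecutive samples. A worked sketch: only the current value comes from the log; the previous sample, the 300 s polling interval, and the single vCPU are assumptions for illustration:

```python
# Deriving CPU utilisation from two cumulative cpu samples.
prev_ns = 335_480_000_000      # hypothetical previous sample
curr_ns = 335_780_000_000      # 2c879d1e-.../cpu volume logged above
interval_s = 300               # assumed polling period
vcpus = 1                      # assumed flavor size

cpu_util_pct = (curr_ns - prev_ns) / (interval_s * 1e9 * vcpus) * 100
print(f"{cpu_util_pct:.1f}%")  # 0.1% -> a mostly idle guest
```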
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.311 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.311 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 569535603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T16:00:01.310059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.311 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 64248485 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.312 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 639094886 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T16:00:01.311587) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.312 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 59124615 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.312 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
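network.incoming.bytes.rate is short-circuited here: discovery ran, but returned no resources that had not already been handled this cycle. A rough sketch of that per-cycle bookkeeping, with invented names that only mirror the behaviour visible in the log, not ceilometer's internals:

```python
# Rough sketch of the "no new resources found this cycle" short-circuit.
polled_this_cycle: dict[str, set[str]] = {}

def maybe_poll(pollster_name: str, discovered: set[str]) -> None:
    seen = polled_this_cycle.setdefault(pollster_name, set())
    new = discovered - seen
    if not new:
        print(f"Skip pollster {pollster_name}, no new resources found this cycle")
        return
    seen |= new
    print(f"Polling pollster {pollster_name} for {sorted(new)}")

instances = {"2c879d1e", "a1c56ffa"}
maybe_poll("network.incoming.bytes.rate", instances)  # first call polls
maybe_poll("network.incoming.bytes.rate", instances)  # second call skips
```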
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.313 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.313 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.313 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.313 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.313 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.314 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.314 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T16:00:01.313718) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.314 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.314 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.315 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.315 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.315 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.315 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.315 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.315 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.316 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.316 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.316 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.317 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.317 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T16:00:01.315497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.317 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.317 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.317 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.317 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.317 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.318 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 72847360 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.318 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.318 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.319 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.319 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.319 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.319 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.319 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 8838861137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.319 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.320 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 3731344942 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T16:00:01.317468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T16:00:01.319253) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.320 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
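The latency meters are cumulative nanosecond totals; they become readable when paired with the matching request counters. Using this cycle's samples for the first device of 2c879d1e-… (the write-request count of 333 appears a few lines below), the lifetime average works out to roughly 26.5 ms per write:

```python
# Pairing two cumulative meters from this cycle to get an average latency.
total_write_latency_ns = 8_838_861_137   # disk.device.write.latency above
total_write_requests = 333               # disk.device.write.requests below
avg_ms = total_write_latency_ns / total_write_requests / 1e6
print(f"{avg_ms:.1f} ms per write")      # ~26.5 ms lifetime average
```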
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.320 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.320 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.321 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.321 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.321 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
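Both guests report power.state volume 1. If the meter exposes libvirt's virDomainState numbering, which is an assumption here, 1 means a running domain:

```python
# libvirt's virDomainState numbering; the assumption is that power.state
# reuses it, which this log alone does not confirm.
LIBVIRT_DOMAIN_STATE = {
    0: "nostate", 1: "running", 2: "blocked", 3: "paused",
    4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
}
print(LIBVIRT_DOMAIN_STATE[1])  # running
```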
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T16:00:01.321127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.322 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.322 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.323 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 312 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.323 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.323 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.323 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.324 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.324 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.324 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.325 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.325 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.325 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.326 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.326 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.326 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.326 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.326 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.327 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.327 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
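For the 2c879d1e-… root device this cycle, the three disk meters now line up: capacity 1073741824 (virtual size), allocation 30089216, and usage 30015488. Read together they suggest a thinly provisioned 1 GiB disk with about 2.8% written, which is an interpretation rather than anything the log states:

```python
# Relating the three disk meters logged for the 2c879d1e-... root device.
capacity = 1_073_741_824    # disk.device.capacity: virtual size
allocation = 30_089_216     # disk.device.allocation
usage = 30_015_488          # disk.device.usage
print(f"{usage / capacity:.1%} of the virtual disk is in use")  # 2.8%
```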
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.327 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.327 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.328 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.328 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.328 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.329 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.329 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.329 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.329 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.327 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T16:00:01.322439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.329 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.329 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.330 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.330 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.330 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.330 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.330 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.330 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.330 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.331 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.331 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.331 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.331 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.331 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.331 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.331 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T16:00:01.324145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T16:00:01.325348) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T16:00:01.326329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.337 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T16:00:01.328104) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T16:00:01.329258) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.338 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T16:00:01.330173) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:00:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:00:01.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T16:00:01.331293) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
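[Editor's note] The ceilometer DEBUG lines above show the polling cycle: each pollster is run, its per-instance samples are logged (the "volume: 0" lines), and a per-pollster heartbeat timestamp is recorded by _update_status. A minimal Python sketch of that pattern follows; it is a hypothetical illustration, not ceilometer's actual implementation, and the pollster names/callables are invented:

```python
from datetime import datetime, timezone

# Hypothetical sketch of the poll-then-heartbeat pattern seen in the
# log above (not ceilometer's real code).
heartbeats = {}

def run_polling_task(pollsters):
    for name, poll in pollsters.items():
        samples = poll()  # e.g. per-instance counters, as in the "volume: 0" lines
        print(f"Finished polling pollster {name} ({len(samples)} samples)")
        # _update_status-style heartbeat: record when this pollster last ran
        heartbeats[name] = datetime.now(timezone.utc).isoformat()

run_polling_task({"network.incoming.packets.error": lambda: [0, 0]})
print(heartbeats)
```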
Nov 29 16:00:01 compute-0 openstack_network_exporter[205841]: ERROR   16:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:00:01 compute-0 openstack_network_exporter[205841]: ERROR   16:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:00:01 compute-0 openstack_network_exporter[205841]: ERROR   16:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:00:01 compute-0 openstack_network_exporter[205841]: ERROR   16:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:00:01 compute-0 openstack_network_exporter[205841]: ERROR   16:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
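[Editor's note] The openstack_network_exporter errors above recur on each scrape (see the identical batch at 16:00:31 below): appctl-style calls locate a daemon through its control socket, named <rundir>/<daemon>.<pid>.ctl, and fail with "no control socket files found" when no such file exists; ovn-northd normally runs only on controller nodes, so its absence on a compute node is expected. A small diagnostic sketch, with run directories assumed from the exporter's volume mounts (/run/openvswitch, /run/ovn):

```python
import glob

# Diagnostic sketch: look for the control sockets that appctl-style
# calls need. No "*.ctl" match reproduces the exporter's error message.
for rundir, daemon in (("/run/openvswitch", "ovsdb-server"),
                       ("/run/ovn", "ovn-northd")):
    sockets = glob.glob(f"{rundir}/{daemon}.*.ctl")
    print(daemon, "->", sockets or "no control socket files found")
```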
Nov 29 16:00:04 compute-0 nova_compute[189485]: 2025-11-29 16:00:04.174 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
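[Editor's note] The recurring "[POLLIN] on fd 26" lines are the OVSDB IDL event loop inside nova_compute waking up when its connection fd becomes readable; ovs.poller logs the wakeup at DEBUG. A self-contained sketch of the same primitive, using a pipe in place of the OVSDB connection:

```python
import os
import ovs.poller  # same module as the poller.py path in the log lines

# Register an fd with ovs.poller and block until it is readable.
# With vlog at DEBUG, this wakeup is what prints "[POLLIN] on fd N".
r, w = os.pipe()
os.write(w, b"x")                      # make the read end readable
poller = ovs.poller.Poller()
poller.fd_wait(r, ovs.poller.POLLIN)   # wait for POLLIN on our fd
poller.block()                         # returns once fd 'r' is ready
print("woke up on fd", r)
```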
Nov 29 16:00:04 compute-0 podman[256615]: 2025-11-29 16:00:04.674133008 +0000 UTC m=+0.116147925 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 16:00:05 compute-0 nova_compute[189485]: 2025-11-29 16:00:05.954 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:09 compute-0 nova_compute[189485]: 2025-11-29 16:00:09.177 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:10 compute-0 nova_compute[189485]: 2025-11-29 16:00:10.958 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:14 compute-0 nova_compute[189485]: 2025-11-29 16:00:14.180 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:15 compute-0 podman[256639]: 2025-11-29 16:00:15.701833618 +0000 UTC m=+0.139905234 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
Nov 29 16:00:15 compute-0 podman[256661]: 2025-11-29 16:00:15.84693226 +0000 UTC m=+0.090774082 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:00:15 compute-0 podman[256668]: 2025-11-29 16:00:15.866412834 +0000 UTC m=+0.091803030 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Nov 29 16:00:15 compute-0 podman[256660]: 2025-11-29 16:00:15.868603653 +0000 UTC m=+0.116251238 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, distribution-scope=public, maintainer=Red Hat, Inc., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, config_id=edpm, release=1214.1726694543, version=9.4, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 16:00:15 compute-0 podman[256662]: 2025-11-29 16:00:15.896256237 +0000 UTC m=+0.133388988 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true)
Nov 29 16:00:15 compute-0 podman[256663]: 2025-11-29 16:00:15.907278923 +0000 UTC m=+0.130687975 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:00:15 compute-0 nova_compute[189485]: 2025-11-29 16:00:15.959 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:19 compute-0 nova_compute[189485]: 2025-11-29 16:00:19.183 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:20 compute-0 podman[256757]: 2025-11-29 16:00:20.672712022 +0000 UTC m=+0.116954307 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 16:00:20 compute-0 nova_compute[189485]: 2025-11-29 16:00:20.963 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:22 compute-0 podman[256777]: 2025-11-29 16:00:22.672156213 +0000 UTC m=+0.106052633 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 16:00:24 compute-0 nova_compute[189485]: 2025-11-29 16:00:24.186 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:25 compute-0 nova_compute[189485]: 2025-11-29 16:00:25.967 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:26 compute-0 nova_compute[189485]: 2025-11-29 16:00:26.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:27 compute-0 nova_compute[189485]: 2025-11-29 16:00:27.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:28 compute-0 nova_compute[189485]: 2025-11-29 16:00:28.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:28 compute-0 nova_compute[189485]: 2025-11-29 16:00:28.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:00:28 compute-0 nova_compute[189485]: 2025-11-29 16:00:28.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:00:29 compute-0 nova_compute[189485]: 2025-11-29 16:00:29.069 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 16:00:29 compute-0 nova_compute[189485]: 2025-11-29 16:00:29.069 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 16:00:29 compute-0 nova_compute[189485]: 2025-11-29 16:00:29.070 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 16:00:29 compute-0 nova_compute[189485]: 2025-11-29 16:00:29.070 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 16:00:29 compute-0 nova_compute[189485]: 2025-11-29 16:00:29.189 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:29 compute-0 podman[203677]: time="2025-11-29T16:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:00:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:00:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
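[Editor's note] The two access-log lines above are the podman system service answering libpod REST calls (API v4.9.3) over /run/podman/podman.sock, the socket that the podman_exporter container mounts. A hedged sketch of issuing the same containers/json query from the Python standard library (the UnixHTTPConnection helper is ad hoc, and the call assumes the socket exists and is readable by the caller):

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection over a Unix domain socket (ad-hoc helper)."""
    def __init__(self, sock_path):
        super().__init__("localhost")
        self.sock_path = sock_path
    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.sock_path)

# Same endpoint as the GET line in the log above.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
body = conn.getresponse().read()
print(len(json.loads(body)), "containers")
```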
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.631 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.646 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.647 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.648 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.648 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.695 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.695 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.696 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
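[Editor's note] The Acquiring/acquired/released triple above is oslo.concurrency's lock logging: nova serializes resource-tracker work on a named "compute_resources" lock. A minimal sketch using the public lockutils decorator (nova's own wrapper differs in detail; the function body here is a stand-in):

```python
from oslo_concurrency import lockutils

# Serialize a critical section on a named in-process lock; entering and
# leaving it produces the Acquiring/acquired/released DEBUG lines above.
@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # ... prune cached compute-node records while holding the lock ...
    pass

clean_compute_node_cache()
```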
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.697 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.790 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.892 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.894 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.971 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:30 compute-0 nova_compute[189485]: 2025-11-29 16:00:30.994 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.007 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.102 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.104 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.201 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
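[Editor's note] During the resource audit above, nova sizes each instance disk by running qemu-img info under oslo_concurrency.prlimit, which caps the child's address space (1 GiB) and CPU time (30 s) so a malformed image cannot wedge the agent. A sketch reproducing the exact command line from the log (the disk path is a placeholder, not a real instance):

```python
import json
import subprocess

disk = "/var/lib/nova/instances/<uuid>/disk"  # placeholder path
cmd = [
    "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
    "--as=1073741824",   # RLIMIT_AS: 1 GiB address space
    "--cpu=30",          # RLIMIT_CPU: 30 CPU-seconds
    "--",
    "env", "LC_ALL=C", "LANG=C",
    "qemu-img", "info", disk, "--force-share", "--output=json",
]
info = json.loads(subprocess.check_output(cmd))
print(info["format"], info["virtual-size"])
```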
Nov 29 16:00:31 compute-0 openstack_network_exporter[205841]: ERROR   16:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:00:31 compute-0 openstack_network_exporter[205841]: ERROR   16:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:00:31 compute-0 openstack_network_exporter[205841]: ERROR   16:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:00:31 compute-0 openstack_network_exporter[205841]: ERROR   16:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:00:31 compute-0 openstack_network_exporter[205841]: ERROR   16:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.714 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.715 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4868MB free_disk=72.24903869628906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.716 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.716 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.802 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.802 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.803 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.803 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.863 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.881 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.882 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:00:31 compute-0 nova_compute[189485]: 2025-11-29 16:00:31.882 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.166s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
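[Editor's note] The inventory dict logged above is what the placement service schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio, which is why this host's 8 physical vCPUs advertise 32 schedulable VCPU. A worked sketch using the exact values from the log line:

```python
# Capacity math behind the "Inventory has not changed" line above:
# usable = (total - reserved) * allocation_ratio
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    usable = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, usable)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```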
Nov 29 16:00:32 compute-0 nova_compute[189485]: 2025-11-29 16:00:32.717 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:32 compute-0 nova_compute[189485]: 2025-11-29 16:00:32.718 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:34 compute-0 nova_compute[189485]: 2025-11-29 16:00:34.193 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:34 compute-0 nova_compute[189485]: 2025-11-29 16:00:34.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:35 compute-0 nova_compute[189485]: 2025-11-29 16:00:35.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:00:35 compute-0 nova_compute[189485]: 2025-11-29 16:00:35.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 16:00:35 compute-0 podman[256815]: 2025-11-29 16:00:35.684586719 +0000 UTC m=+0.117541782 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 16:00:35 compute-0 nova_compute[189485]: 2025-11-29 16:00:35.975 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:39 compute-0 nova_compute[189485]: 2025-11-29 16:00:39.197 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:40 compute-0 nova_compute[189485]: 2025-11-29 16:00:40.978 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:44 compute-0 nova_compute[189485]: 2025-11-29 16:00:44.201 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:45 compute-0 nova_compute[189485]: 2025-11-29 16:00:45.982 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:46 compute-0 podman[256839]: 2025-11-29 16:00:46.691359949 +0000 UTC m=+0.129166674 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, managed_by=edpm_ansible, io.openshift.expose-services=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, distribution-scope=public, build-date=2024-09-18T21:23:30)
Nov 29 16:00:46 compute-0 podman[256841]: 2025-11-29 16:00:46.702733656 +0000 UTC m=+0.118967870 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:00:46 compute-0 podman[256840]: 2025-11-29 16:00:46.7088316 +0000 UTC m=+0.134071847 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 16:00:46 compute-0 podman[256849]: 2025-11-29 16:00:46.726630579 +0000 UTC m=+0.145163936 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 16:00:46 compute-0 podman[256853]: 2025-11-29 16:00:46.72706094 +0000 UTC m=+0.128889367 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, version=9.6, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc.)
Nov 29 16:00:46 compute-0 podman[256842]: 2025-11-29 16:00:46.738933549 +0000 UTC m=+0.150854487 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:00:49 compute-0 nova_compute[189485]: 2025-11-29 16:00:49.204 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:50 compute-0 nova_compute[189485]: 2025-11-29 16:00:50.985 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:51 compute-0 podman[256954]: 2025-11-29 16:00:51.639913186 +0000 UTC m=+0.090211617 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 16:00:53 compute-0 podman[256974]: 2025-11-29 16:00:53.68862874 +0000 UTC m=+0.118092346 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:00:54 compute-0 nova_compute[189485]: 2025-11-29 16:00:54.209 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:55 compute-0 nova_compute[189485]: 2025-11-29 16:00:55.989 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:59 compute-0 nova_compute[189485]: 2025-11-29 16:00:59.217 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:00:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:00:59.219 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:00:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:00:59.223 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:00:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:00:59.224 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:00:59 compute-0 podman[203677]: time="2025-11-29T16:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:00:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:00:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
Nov 29 16:01:00 compute-0 nova_compute[189485]: 2025-11-29 16:01:00.993 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:01 compute-0 openstack_network_exporter[205841]: ERROR   16:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:01:01 compute-0 openstack_network_exporter[205841]: ERROR   16:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:01:01 compute-0 openstack_network_exporter[205841]: ERROR   16:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:01:01 compute-0 openstack_network_exporter[205841]: ERROR   16:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:01:01 compute-0 openstack_network_exporter[205841]: ERROR   16:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:01:04 compute-0 nova_compute[189485]: 2025-11-29 16:01:04.227 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:05 compute-0 nova_compute[189485]: 2025-11-29 16:01:05.995 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:06 compute-0 podman[257010]: 2025-11-29 16:01:06.666239879 +0000 UTC m=+0.104947484 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 16:01:09 compute-0 nova_compute[189485]: 2025-11-29 16:01:09.234 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:11 compute-0 nova_compute[189485]: 2025-11-29 16:01:11.000 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:14 compute-0 nova_compute[189485]: 2025-11-29 16:01:14.239 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:16 compute-0 nova_compute[189485]: 2025-11-29 16:01:16.004 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:17 compute-0 podman[257034]: 2025-11-29 16:01:17.6859757 +0000 UTC m=+0.120518461 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 16:01:17 compute-0 podman[257035]: 2025-11-29 16:01:17.7041873 +0000 UTC m=+0.125475704 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:01:17 compute-0 podman[257033]: 2025-11-29 16:01:17.704448097 +0000 UTC m=+0.136513532 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, config_id=edpm, io.openshift.tags=base rhel9, release=1214.1726694543, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, distribution-scope=public)
Nov 29 16:01:17 compute-0 podman[257049]: 2025-11-29 16:01:17.720475658 +0000 UTC m=+0.126880483 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, version=9.6, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=edpm, release=1755695350)
Nov 29 16:01:17 compute-0 podman[257036]: 2025-11-29 16:01:17.724865556 +0000 UTC m=+0.140405996 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 16:01:17 compute-0 podman[257046]: 2025-11-29 16:01:17.749147139 +0000 UTC m=+0.155545244 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Nov 29 16:01:19 compute-0 nova_compute[189485]: 2025-11-29 16:01:19.243 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:21 compute-0 nova_compute[189485]: 2025-11-29 16:01:21.006 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:22 compute-0 podman[257151]: 2025-11-29 16:01:22.70820227 +0000 UTC m=+0.137382185 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:01:24 compute-0 nova_compute[189485]: 2025-11-29 16:01:24.247 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:24 compute-0 podman[257172]: 2025-11-29 16:01:24.701616708 +0000 UTC m=+0.137891500 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:01:26 compute-0 nova_compute[189485]: 2025-11-29 16:01:26.011 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.486 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.541 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.542 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.543 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.544 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.660 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.758 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.761 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.822 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.831 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.924 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:01:28 compute-0 nova_compute[189485]: 2025-11-29 16:01:28.925 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.022 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.250 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.572 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.573 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4880MB free_disk=72.24903869628906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.573 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.573 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.678 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.679 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.679 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.679 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:01:29 compute-0 podman[203677]: time="2025-11-29T16:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:01:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.756 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:01:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.784 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.786 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:01:29 compute-0 nova_compute[189485]: 2025-11-29 16:01:29.787 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:01:31 compute-0 nova_compute[189485]: 2025-11-29 16:01:31.012 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:31 compute-0 openstack_network_exporter[205841]: ERROR   16:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:01:31 compute-0 openstack_network_exporter[205841]: ERROR   16:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:01:31 compute-0 openstack_network_exporter[205841]: ERROR   16:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:01:31 compute-0 openstack_network_exporter[205841]: ERROR   16:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:01:31 compute-0 openstack_network_exporter[205841]: ERROR   16:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:01:31 compute-0 nova_compute[189485]: 2025-11-29 16:01:31.786 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:31 compute-0 nova_compute[189485]: 2025-11-29 16:01:31.786 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:01:32 compute-0 nova_compute[189485]: 2025-11-29 16:01:32.135 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 16:01:32 compute-0 nova_compute[189485]: 2025-11-29 16:01:32.135 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 16:01:32 compute-0 nova_compute[189485]: 2025-11-29 16:01:32.135 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 16:01:33 compute-0 nova_compute[189485]: 2025-11-29 16:01:33.752 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updating instance_info_cache with network_info: [{"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 16:01:33 compute-0 nova_compute[189485]: 2025-11-29 16:01:33.770 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 16:01:33 compute-0 nova_compute[189485]: 2025-11-29 16:01:33.771 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 16:01:33 compute-0 nova_compute[189485]: 2025-11-29 16:01:33.772 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:33 compute-0 nova_compute[189485]: 2025-11-29 16:01:33.773 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:34 compute-0 nova_compute[189485]: 2025-11-29 16:01:34.254 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:34 compute-0 nova_compute[189485]: 2025-11-29 16:01:34.466 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:34 compute-0 nova_compute[189485]: 2025-11-29 16:01:34.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:36 compute-0 nova_compute[189485]: 2025-11-29 16:01:36.015 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:36 compute-0 nova_compute[189485]: 2025-11-29 16:01:36.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:36 compute-0 nova_compute[189485]: 2025-11-29 16:01:36.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 16:01:37 compute-0 podman[257208]: 2025-11-29 16:01:37.656829373 +0000 UTC m=+0.102055566 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 16:01:39 compute-0 nova_compute[189485]: 2025-11-29 16:01:39.259 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:41 compute-0 nova_compute[189485]: 2025-11-29 16:01:41.018 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:44 compute-0 nova_compute[189485]: 2025-11-29 16:01:44.263 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:46 compute-0 nova_compute[189485]: 2025-11-29 16:01:46.022 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:47 compute-0 nova_compute[189485]: 2025-11-29 16:01:47.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:01:48 compute-0 podman[257231]: 2025-11-29 16:01:48.691538691 +0000 UTC m=+0.125789884 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-type=git, config_id=edpm, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 16:01:48 compute-0 podman[257234]: 2025-11-29 16:01:48.709619258 +0000 UTC m=+0.126179125 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 16:01:48 compute-0 podman[257232]: 2025-11-29 16:01:48.712829334 +0000 UTC m=+0.140033577 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Nov 29 16:01:48 compute-0 podman[257233]: 2025-11-29 16:01:48.725032222 +0000 UTC m=+0.147809316 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi)
Nov 29 16:01:48 compute-0 podman[257256]: 2025-11-29 16:01:48.727393445 +0000 UTC m=+0.112314031 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter)
Nov 29 16:01:48 compute-0 podman[257239]: 2025-11-29 16:01:48.738474563 +0000 UTC m=+0.139573034 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
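The seven health_status events above are emitted by podman's healthcheck timers; each container's config_data carries the check command under 'test' and its mount under 'mount'. The same state can be read back on demand. A small sketch, assuming the podman CLI is on PATH and using the container names logged above:

    import json
    import subprocess

    # Container names taken from the health_status events in this log.
    CONTAINERS = ["podman_exporter", "kepler", "ceilometer_agent_compute",
                  "ovn_metadata_agent", "ceilometer_agent_ipmi",
                  "openstack_network_exporter", "ovn_controller"]

    def health_status(name: str) -> str:
        # `podman inspect` renders the health block as JSON via a Go template.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{json .State.Health}}", name],
            capture_output=True, text=True, check=True,
        ).stdout
        health = json.loads(out) or {}  # null when no healthcheck is defined
        return health.get("Status", "unknown")

    for name in CONTAINERS:
        print(name, health_status(name))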
Nov 29 16:01:49 compute-0 nova_compute[189485]: 2025-11-29 16:01:49.267 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:51 compute-0 nova_compute[189485]: 2025-11-29 16:01:51.025 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:53 compute-0 podman[257350]: 2025-11-29 16:01:53.617243601 +0000 UTC m=+0.074693660 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:01:54 compute-0 nova_compute[189485]: 2025-11-29 16:01:54.270 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:55 compute-0 podman[257368]: 2025-11-29 16:01:55.688218856 +0000 UTC m=+0.132103984 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 16:01:56 compute-0 nova_compute[189485]: 2025-11-29 16:01:56.028 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:01:59.221 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:01:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:01:59.222 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:01:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:01:59.223 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" released by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
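The acquiring/acquired/released triple above is oslo.concurrency's standard DEBUG trace around a named lock, including time waited and time held. A sketch of the pattern, with the lock name taken from the log and the function body illustrative:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs while holding the named lock. With DEBUG logging enabled,
        # oslo.concurrency logs "Acquiring"/"acquired" (with time waited)
        # on entry and "released" (with time held) on exit, exactly as above.
        pass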
Nov 29 16:01:59 compute-0 nova_compute[189485]: 2025-11-29 16:01:59.272 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:01:59 compute-0 podman[203677]: time="2025-11-29T16:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:01:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:01:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4800 "" "Go-http-client/1.1"
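These two GETs hit the libpod REST API over the unix socket that the podman_exporter container mounts (CONTAINER_HOST=unix:///run/podman/podman.sock earlier in this log). A self-contained sketch of the same containers/json call using only the standard library; the field names printed at the end are an assumption about the response shape:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket, for the podman API socket above."""

        def __init__(self, path: str):
            super().__init__("localhost")  # host is ignored for unix sockets
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Endpoint copied from the access-log line above.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    for c in containers:
        print(c.get("Names"), c.get("State"))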
Nov 29 16:02:01 compute-0 nova_compute[189485]: 2025-11-29 16:02:01.031 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.064 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.065 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
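With a single worker thread, all registered pollsters queue behind one another inside the executor, which is exactly what the preceding warning is about. An illustrative reduction of the pattern; pollster names are taken from this log and the poll body is a stub:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        # Stand-in for a pollster's get_samples() work.
        return f"polled {name}"

    pollsters = ["network.outgoing.bytes", "network.incoming.bytes",
                 "memory.usage", "disk.device.read.bytes"]

    # max_workers=1 matches the "[1] threads" logged above: tasks beyond
    # the first simply wait in the executor's queue.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)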
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1fb0bf80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
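Each "Registering pollster [<stevedore.extension.Extension ...>]" line above corresponds to one entry point loaded through stevedore. A hedged sketch of that loading step; the namespace below is the entry-point group ceilometer's compute agent is conventionally wired to, which this log does not itself confirm:

    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace="ceilometer.poll.compute",  # assumed entry-point group
        invoke_on_load=False,  # the agent decides when to instantiate
    )
    for ext in mgr:
        # Each `ext` is the <stevedore.extension.Extension ...> object
        # seen in the registration lines above.
        print(ext.name, ext.entry_point)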
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.075 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'name': 'te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.081 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a1c56ffa-6d1c-408c-8667-517745513fd0', 'name': 'te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
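Discovery hands each pollster per-instance dicts like the two above. A trivial illustration of consuming them, with the payloads trimmed to the fields used:

    # Dicts abbreviated from the discovery lines above.
    instances = [
        {"id": "2c879d1e-7499-4665-8880-438b30ff9d86",
         "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com",
         "OS-EXT-STS:vm_state": "running"},
        {"id": "a1c56ffa-6d1c-408c-8667-517745513fd0",
         "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com",
         "OS-EXT-STS:vm_state": "running"},
    ]

    local_running = [
        inst["id"] for inst in instances
        if inst["OS-EXT-SRV-ATTR:host"] == "compute-0.ctlplane.example.com"
        and inst["OS-EXT-STS:vm_state"] == "running"
    ]
    print(local_running)  # both instances are polled below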
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.081 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.082 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.084 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T16:02:01.082563) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.089 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.096 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.097 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.098 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.099 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.099 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.099 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
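network.outgoing.bytes is a cumulative counter, while the .delta variant reports growth since the previous cycle: above, one instance sits at 2250 with delta 0 (unchanged), the other at 2250 with delta 630 (previous reading would have been 1620). A generic sketch of that cumulative-to-delta step, not ceilometer's exact code:

    previous: dict[str, int] = {}

    def delta_sample(resource_id: str, cumulative: int) -> int:
        # First observation has no baseline, so the delta is zero.
        delta = cumulative - previous.get(resource_id, cumulative)
        previous[resource_id] = cumulative
        # Counters can reset (e.g. instance reboot); clamp at zero.
        return max(delta, 0)

    print(delta_sample("a1c56ffa", 1620))  # first cycle -> 0
    print(delta_sample("a1c56ffa", 2250))  # next cycle -> 630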
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.101 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.101 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T16:02:01.099028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.102 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T16:02:01.101717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.142 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/memory.usage volume: 42.37890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.181 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/memory.usage volume: 42.4140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
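memory.usage lands at roughly 42.4 MB against the 128 MB m1.nano flavor. The log does not show how the figure is derived; one plausible route, sketched here purely as an assumption, is libvirt's per-domain memory stats (reported in KiB) reduced to available minus unused:

    import libvirt  # requires libvirt-python and read access to libvirtd

    conn = libvirt.openReadOnly("qemu:///system")
    dom = conn.lookupByName("instance-0000000b")  # name from discovery above

    stats = dom.memoryStats()  # KiB values: 'available', 'unused', 'rss', ...
    if "available" in stats and "unused" in stats:
        usage_mb = (stats["available"] - stats["unused"]) / 1024.0
    else:
        usage_mb = stats.get("rss", 0) / 1024.0  # fall back to resident size
    print(f"memory.usage ~ {usage_mb} MB")  # NOT ceilometer's exact formula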
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.182 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.183 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.183 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.183 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.185 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.185 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.186 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.186 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.186 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.187 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.188 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.189 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.189 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T16:02:01.183034) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.190 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.190 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T16:02:01.186373) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T16:02:01.189047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.192 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.192 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.192 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T16:02:01.192844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.264 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.265 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.336 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.337 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.338 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.338 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.339 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.339 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.339 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T16:02:01.339298) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.339 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.340 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.341 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.341 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.341 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T16:02:01.341856) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.368 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.369 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.389 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.390 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.391 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.392 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.392 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.392 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.392 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/cpu volume: 337650000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.393 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/cpu volume: 334020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T16:02:01.392318) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.394 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
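The two cpu samples above are cumulative guest CPU time in nanoseconds (337650000000 ns and 334020000000 ns). A utilisation percentage only falls out once two successive polls are compared; the sketch below shows that arithmetic with an assumed 300 s polling interval and a hypothetical follow-up value, using illustrative names rather than any ceilometer API.

    # Illustrative only: derive CPU utilisation from two cumulative
    # "cpu" samples (nanoseconds of guest CPU time). The interval and
    # the second sample value are assumptions for the example.
    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        """Percent of available vCPU time used between two polls."""
        used_ns = curr_ns - prev_ns
        avail_ns = interval_s * vcpus * 1_000_000_000
        return 100.0 * used_ns / avail_ns

    prev = 337_650_000_000                    # value polled at 16:02:01
    curr = 337_950_000_000                    # hypothetical next cycle
    print(cpu_util_percent(prev, curr, 300))  # -> 0.1 (% of one vCPU)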
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.394 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.395 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.395 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 569535603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.396 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 64248485 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.396 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 667951938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.397 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 71545186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.398 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.398 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.399 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.399 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T16:02:01.395219) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.399 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.399 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.400 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.400 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.400 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.401 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.401 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T16:02:01.400090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.402 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.402 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
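Because disk.device.read.latency and disk.device.read.requests are both cumulative counters for the same device, dividing the values polled above gives a lifetime average latency per read; for instance 2c879d1e-7499-4665-8880-438b30ff9d86 that works out to roughly 515 µs. The values below are copied from the log lines; nothing here calls ceilometer itself.

    # Mean read latency per request for the first device of instance
    # 2c879d1e-..., from the cumulative counters polled above.
    total_latency_ns = 569_535_603   # disk.device.read.latency
    total_requests = 1_106           # disk.device.read.requests
    mean_us = total_latency_ns / total_requests / 1_000
    print(f"{mean_us:.0f} us per read")  # -> 515 us per read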
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.403 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.403 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.403 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.403 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.403 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.404 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.404 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.405 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T16:02:01.403858) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.405 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.405 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.406 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.406 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.406 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.407 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.407 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.407 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.407 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.408 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.408 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.409 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.409 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.410 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.410 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T16:02:01.407339) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.410 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.411 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 8838861137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.411 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.412 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 3824252546 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.412 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.413 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.413 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.413 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.414 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.414 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.414 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
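The power.state volume of 1 reported for both instances is a libvirt domain state; assuming the value is the virDomainState enum passed through unchanged by the compute inspector, 1 means the domain is running. The mapping below restates that enum for reference.

    # libvirt virDomainState values as power.state reports them
    # (assumed passed through unchanged by the compute inspector).
    LIBVIRT_POWER_STATES = {
        0: "no state",
        1: "running",
        2: "blocked",
        3: "paused",
        4: "shutting down",
        5: "shut off",
        6: "crashed",
        7: "pm-suspended",
    }
    print(LIBVIRT_POWER_STATES[1])  # -> running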
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.415 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.415 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.415 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.415 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T16:02:01.411020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.416 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.416 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 openstack_network_exporter[205841]: ERROR   16:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:02:01 compute-0 openstack_network_exporter[205841]: ERROR   16:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:02:01 compute-0 openstack_network_exporter[205841]: ERROR   16:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
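These exporter errors mean it could not find the Unix control sockets (*.ctl files) it uses to query ovsdb-server and ovn-northd; on a compute node, where ovn-northd normally runs only on controllers, the ovn-northd lookup failing is expected noise. A minimal check under the common default run directories (the paths are an assumption; deployments can relocate them) might look like:

    # Probe for the control sockets the exporter is looking for.
    import glob

    ovsdb = glob.glob("/var/run/openvswitch/ovsdb-server.*.ctl")
    northd = glob.glob("/var/run/ovn/ovn-northd.*.ctl")
    print("ovsdb-server:", ovsdb or "no control socket (matches the error)")
    print("ovn-northd:", northd or "no control socket (matches the error)")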
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.417 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.417 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.417 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 openstack_network_exporter[205841]: ERROR   16:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
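The dpif-netdev/pmd-perf-show failure is consistent with a host running only the kernel datapath: the dpif-netdev commands apply to the userspace (netdev/DPDK) datapath, so with none present there is no datapath to name. One way to confirm what exists is ovs-appctl's dpctl/dump-dps, sketched here as a suggestion rather than anything the exporter itself runs:

    # List existing OVS datapaths; a kernel datapath typically shows
    # as "system@ovs-system", a userspace one as "netdev@ovs-netdev".
    import subprocess

    out = subprocess.run(["ovs-appctl", "dpctl/dump-dps"],
                         capture_output=True, text=True)
    print(out.stdout or out.stderr)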
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.418 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.418 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.418 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.419 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.419 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.419 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.420 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.420 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.420 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.420 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.420 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.421 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.421 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.421 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.421 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T16:02:01.414001) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.421 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.422 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.422 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.422 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
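The per-device disk gauges polled above are plain byte counts, so unit conversions are direct: the 1073741824 B capacity is exactly 1 GiB, and the usage and allocation figures are a little under 29 MiB. The values below are copied from the lines for instance 2c879d1e-7499-4665-8880-438b30ff9d86.

    # Byte conversions for the per-device disk gauges polled above.
    capacity = 1_073_741_824    # disk.device.capacity
    usage = 30_015_488          # disk.device.usage
    allocation = 30_089_216     # disk.device.allocation
    print(capacity / 2**30)              # -> 1.0 (GiB)
    print(round(usage / 2**20, 1))       # -> 28.6 (MiB)
    print(round(allocation / 2**20, 1))  # -> 28.7 (MiB)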
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.422 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.423 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.423 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.423 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.423 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.423 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.424 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.424 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.424 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.424 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.424 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T16:02:01.416704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.425 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 openstack_network_exporter[205841]: ERROR   16:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.425 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.426 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.426 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.426 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.427 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.427 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.427 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.428 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T16:02:01.418735) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T16:02:01.420153) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.429 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.430 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T16:02:01.421379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.431 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.432 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.432 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.432 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.432 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.432 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.432 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T16:02:01.423376) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.432 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.433 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.433 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.433 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.433 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T16:02:01.424870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T16:02:01.425976) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:02:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:02:01.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T16:02:01.427342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
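The ceilometer lines above show two polling threads interleaved: thread 14 finishes individual pollsters while thread 12 records a per-pollster heartbeat timestamp. A minimal sketch of that loop, with illustrative names (Pollster, run_polling_task) rather than Ceilometer's actual classes:

    import datetime
    import logging

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger("ceilometer.polling.manager")

    class Pollster:
        # Stand-in for a pollster; real ones query libvirt, sysfs, etc.
        def __init__(self, name):
            self.name = name

        def get_samples(self):
            return []

    def run_polling_task(pollsters, heartbeats):
        for pollster in pollsters:
            pollster.get_samples()
            LOG.debug("Finished processing pollster [%s].", pollster.name)
            # Record a heartbeat so an operator can see the pollster is alive.
            ts = datetime.datetime.utcnow().isoformat()
            heartbeats[pollster.name] = ts
            LOG.debug("Updated heartbeat for %s (%s)", pollster.name, ts)

    heartbeats = {}
    run_polling_task([Pollster("cpu"), Pollster("power.state")], heartbeats)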
Nov 29 16:02:04 compute-0 nova_compute[189485]: 2025-11-29 16:02:04.276 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:06 compute-0 nova_compute[189485]: 2025-11-29 16:02:06.034 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:08 compute-0 podman[257396]: 2025-11-29 16:02:08.648383223 +0000 UTC m=+0.104102270 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:02:09 compute-0 nova_compute[189485]: 2025-11-29 16:02:09.278 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:11 compute-0 nova_compute[189485]: 2025-11-29 16:02:11.036 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:14 compute-0 nova_compute[189485]: 2025-11-29 16:02:14.283 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:16 compute-0 nova_compute[189485]: 2025-11-29 16:02:16.040 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:19 compute-0 nova_compute[189485]: 2025-11-29 16:02:19.287 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:19 compute-0 podman[257420]: 2025-11-29 16:02:19.676359665 +0000 UTC m=+0.107941304 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 16:02:19 compute-0 podman[257422]: 2025-11-29 16:02:19.681343039 +0000 UTC m=+0.095512699 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 29 16:02:19 compute-0 podman[257419]: 2025-11-29 16:02:19.691240615 +0000 UTC m=+0.127426468 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.openshift.tags=base rhel9, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Nov 29 16:02:19 compute-0 podman[257433]: 2025-11-29 16:02:19.707598255 +0000 UTC m=+0.122286710 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, name=ubi9-minimal, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc.)
Nov 29 16:02:19 compute-0 podman[257421]: 2025-11-29 16:02:19.729513224 +0000 UTC m=+0.143913540 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 16:02:19 compute-0 podman[257428]: 2025-11-29 16:02:19.75313551 +0000 UTC m=+0.151129466 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image)
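Each podman health_status event above comes from a container healthcheck whose 'test' command and 'mount' directory appear in config_data. A sketch, assuming the 'mount' maps to the '/openstack' volume seen in each container's volume list, of rendering that stanza into podman CLI flags (--health-cmd and --volume are real podman flags):

    def healthcheck_flags(config_data):
        # Render the 'healthcheck' stanza into podman CLI flags.
        hc = config_data.get("healthcheck", {})
        flags = []
        if "test" in hc:
            flags += ["--health-cmd", hc["test"]]
        if "mount" in hc:
            # Assumption: the mount becomes the /openstack volume seen above.
            flags += ["--volume", hc["mount"] + ":/openstack:ro,z"]
        return flags

    cfg = {"healthcheck": {"test": "/openstack/healthcheck compute",
                           "mount": "/var/lib/openstack/healthchecks/ceilometer_agent_compute"}}
    print(healthcheck_flags(cfg))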
Nov 29 16:02:21 compute-0 nova_compute[189485]: 2025-11-29 16:02:21.042 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:24 compute-0 nova_compute[189485]: 2025-11-29 16:02:24.291 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:24 compute-0 podman[257537]: 2025-11-29 16:02:24.625602286 +0000 UTC m=+0.080661510 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Nov 29 16:02:26 compute-0 nova_compute[189485]: 2025-11-29 16:02:26.045 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:26 compute-0 podman[257556]: 2025-11-29 16:02:26.695978726 +0000 UTC m=+0.133912702 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
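node_exporter, openstack_network_exporter and podman_exporter publish Prometheus metrics on the host ports listed under 'ports' above (9100, 9105, 9882). A sketch of a scrape; it assumes the endpoints accept plain HTTP and ignores the TLS settings referenced by --web.config.file, so it may fail against a TLS-only listener:

    import urllib.request

    # Ports taken from the 'ports' entries in the config_data above.
    for port in (9100, 9105, 9882):
        url = f"http://localhost:{port}/metrics"
        with urllib.request.urlopen(url, timeout=5) as resp:
            # Print the first metrics line from each exporter.
            print(port, resp.read().decode().splitlines()[0])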
Nov 29 16:02:27 compute-0 nova_compute[189485]: 2025-11-29 16:02:27.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:27 compute-0 nova_compute[189485]: 2025-11-29 16:02:27.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 16:02:27 compute-0 nova_compute[189485]: 2025-11-29 16:02:27.513 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.513 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.543 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.544 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.544 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
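The lockutils trio above (Acquiring/acquired/released) also reports how long the caller waited for and then held "compute_resources". A sketch of that instrumentation using a plain threading.Lock, not oslo.concurrency's implementation:

    import threading
    import time
    from contextlib import contextmanager

    _locks = {"compute_resources": threading.Lock()}

    @contextmanager
    def timed_lock(name, by):
        lock = _locks[name]
        t0 = time.monotonic()
        lock.acquire()
        # Time spent blocked waiting for the lock.
        print(f'Lock "{name}" acquired by "{by}" :: waited {time.monotonic() - t0:.3f}s')
        t1 = time.monotonic()
        try:
            yield
        finally:
            lock.release()
            # Time the critical section held the lock.
            print(f'Lock "{name}" "released" by "{by}" :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("compute_resources", "clean_compute_node_cache"):
        pass  # critical section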
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.545 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.654 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.744 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.745 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.846 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.853 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.929 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:02:28 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.931 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:28.999 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
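The disk audit shells out to qemu-img under oslo's prlimit wrapper, capping address space at 1 GiB and CPU time at 30 s, and parses the JSON output. A sketch reproducing the command line logged above; 'virtual-size' and 'format' are standard keys in qemu-img's JSON output:

    import json
    import subprocess

    def qemu_img_info(path):
        # Mirror the command line shown in the log, including the prlimit caps.
        cmd = ["/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
               "--as=1073741824", "--cpu=30", "--",
               "env", "LC_ALL=C", "LANG=C",
               "qemu-img", "info", path, "--force-share", "--output=json"]
        return json.loads(subprocess.check_output(cmd))

    info = qemu_img_info("/var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk")
    print(info["format"], info["virtual-size"])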
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.296 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.398 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.399 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4882MB free_disk=72.24903869628906GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.400 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.400 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.665 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.666 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.667 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.668 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:02:29 compute-0 podman[203677]: time="2025-11-29T16:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:02:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.763 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 16:02:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
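The two GETs above are podman_exporter calling podman's libpod REST API over the unix socket mounted at /run/podman/podman.sock. A sketch of the same query with only the standard library (http.client has no native unix-socket support, hence the small subclass):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # Route an ordinary HTTP request over an AF_UNIX socket.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")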
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.837 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.838 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.861 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.886 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.949 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.966 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
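From the inventory payload above, placement's effective capacity per resource class is (total - reserved) * allocation_ratio, which is why 8 physical vCPUs advertise room for 32 and the 79 GB disk only 70.2. A worked check:

    # Inventory values copied from the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2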
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.967 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:02:29 compute-0 nova_compute[189485]: 2025-11-29 16:02:29.968 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.567s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:02:31 compute-0 nova_compute[189485]: 2025-11-29 16:02:31.048 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:31 compute-0 openstack_network_exporter[205841]: ERROR   16:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:02:31 compute-0 openstack_network_exporter[205841]: ERROR   16:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:02:31 compute-0 openstack_network_exporter[205841]: ERROR   16:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:02:31 compute-0 openstack_network_exporter[205841]: ERROR   16:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:02:31 compute-0 openstack_network_exporter[205841]: ERROR   16:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
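These exporter errors are expected on a compute node: appctl-style calls need the target daemon's control socket in its run directory, and ovn-northd runs on controller nodes only, so no socket file exists here. A sketch of the lookup, assuming the conventional <rundir>/<daemon>.<pid>.ctl naming:

    import glob

    def control_socket(rundir, daemon):
        # OVS-style daemons create <rundir>/<daemon>.<pid>.ctl while running.
        matches = glob.glob(f"{rundir}/{daemon}.*.ctl")
        return matches[0] if matches else None

    # On compute-0 this prints None, matching the errors above.
    print(control_socket("/var/run/ovn", "ovn-northd"))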
Nov 29 16:02:31 compute-0 nova_compute[189485]: 2025-11-29 16:02:31.939 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:31 compute-0 nova_compute[189485]: 2025-11-29 16:02:31.939 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:02:31 compute-0 nova_compute[189485]: 2025-11-29 16:02:31.940 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:02:32 compute-0 nova_compute[189485]: 2025-11-29 16:02:32.233 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Nov 29 16:02:32 compute-0 nova_compute[189485]: 2025-11-29 16:02:32.234 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Nov 29 16:02:32 compute-0 nova_compute[189485]: 2025-11-29 16:02:32.234 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Nov 29 16:02:32 compute-0 nova_compute[189485]: 2025-11-29 16:02:32.235 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Nov 29 16:02:33 compute-0 nova_compute[189485]: 2025-11-29 16:02:33.942 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 16:02:33 compute-0 nova_compute[189485]: 2025-11-29 16:02:33.962 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 16:02:33 compute-0 nova_compute[189485]: 2025-11-29 16:02:33.963 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
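The heal task above stores one network_info document per instance. A sketch that walks the structure logged at 16:02:33 (trimmed to the keys it actually uses) and pulls each port's fixed IPs:

    # network_info shape as shown in the log, reduced to the relevant keys.
    network_info = [{
        "id": "28ff21af-c272-489e-85c2-27ab6ad320db",
        "address": "fa:16:3e:82:93:16",
        "network": {"subnets": [{"cidr": "10.100.0.0/16",
                                 "ips": [{"address": "10.100.3.44", "type": "fixed"}]}]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                if ip["type"] == "fixed":
                    print(vif["id"], ip["address"])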
Nov 29 16:02:33 compute-0 nova_compute[189485]: 2025-11-29 16:02:33.964 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:33 compute-0 nova_compute[189485]: 2025-11-29 16:02:33.964 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:33 compute-0 nova_compute[189485]: 2025-11-29 16:02:33.965 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:33 compute-0 nova_compute[189485]: 2025-11-29 16:02:33.965 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:34 compute-0 nova_compute[189485]: 2025-11-29 16:02:34.299 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:34 compute-0 nova_compute[189485]: 2025-11-29 16:02:34.505 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:35 compute-0 nova_compute[189485]: 2025-11-29 16:02:35.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:36 compute-0 nova_compute[189485]: 2025-11-29 16:02:36.052 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:38 compute-0 nova_compute[189485]: 2025-11-29 16:02:38.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:38 compute-0 nova_compute[189485]: 2025-11-29 16:02:38.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
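_reclaim_queued_deletes illustrates the periodic-task pattern that runs throughout this window: each task is announced, then may skip itself based on config. A sketch with a plain class standing in for oslo.config's CONF:

    class CONF:
        reclaim_instance_interval = 0  # matches the skip logged above

    def _reclaim_queued_deletes():
        print("Running periodic task ComputeManager._reclaim_queued_deletes")
        if CONF.reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # Otherwise, reclaim soft-deleted instances older than the interval.

    _reclaim_queued_deletes()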
Nov 29 16:02:39 compute-0 nova_compute[189485]: 2025-11-29 16:02:39.304 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:39 compute-0 podman[257591]: 2025-11-29 16:02:39.6982971 +0000 UTC m=+0.126419181 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Nov 29 16:02:41 compute-0 nova_compute[189485]: 2025-11-29 16:02:41.056 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:44 compute-0 nova_compute[189485]: 2025-11-29 16:02:44.310 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:46 compute-0 nova_compute[189485]: 2025-11-29 16:02:46.060 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:49 compute-0 nova_compute[189485]: 2025-11-29 16:02:49.316 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:51 compute-0 nova_compute[189485]: 2025-11-29 16:02:51.063 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:51 compute-0 podman[257619]: 2025-11-29 16:02:51.096781745 +0000 UTC m=+0.094188304 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:02:51 compute-0 podman[257615]: 2025-11-29 16:02:51.098233314 +0000 UTC m=+0.107426699 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, release-0.7.12=, managed_by=edpm_ansible, distribution-scope=public, release=1214.1726694543, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9)
Nov 29 16:02:51 compute-0 podman[257630]: 2025-11-29 16:02:51.124350207 +0000 UTC m=+0.098251043 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_id=edpm, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, architecture=x86_64, release=1755695350)
Nov 29 16:02:51 compute-0 podman[257616]: 2025-11-29 16:02:51.125296113 +0000 UTC m=+0.127803559 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Nov 29 16:02:51 compute-0 podman[257617]: 2025-11-29 16:02:51.1330097 +0000 UTC m=+0.124214662 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 16:02:51 compute-0 podman[257624]: 2025-11-29 16:02:51.160315494 +0000 UTC m=+0.149088351 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125)
Nov 29 16:02:54 compute-0 nova_compute[189485]: 2025-11-29 16:02:54.318 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:54 compute-0 nova_compute[189485]: 2025-11-29 16:02:54.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:55 compute-0 nova_compute[189485]: 2025-11-29 16:02:55.497 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:02:55 compute-0 nova_compute[189485]: 2025-11-29 16:02:55.498 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 16:02:55 compute-0 podman[257732]: 2025-11-29 16:02:55.665586535 +0000 UTC m=+0.104748387 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 16:02:56 compute-0 nova_compute[189485]: 2025-11-29 16:02:56.066 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:57 compute-0 podman[257752]: 2025-11-29 16:02:57.623679893 +0000 UTC m=+0.065928824 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 16:02:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:02:59.223 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:02:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:02:59.225 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:02:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:02:59.227 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:02:59 compute-0 nova_compute[189485]: 2025-11-29 16:02:59.321 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:02:59 compute-0 podman[203677]: time="2025-11-29T16:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:02:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:02:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4789 "" "Go-http-client/1.1"
Nov 29 16:03:01 compute-0 nova_compute[189485]: 2025-11-29 16:03:01.070 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:01 compute-0 openstack_network_exporter[205841]: ERROR   16:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:03:01 compute-0 openstack_network_exporter[205841]: ERROR   16:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:03:01 compute-0 openstack_network_exporter[205841]: ERROR   16:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:03:01 compute-0 openstack_network_exporter[205841]: ERROR   16:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:03:01 compute-0 openstack_network_exporter[205841]: ERROR   16:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:03:04 compute-0 nova_compute[189485]: 2025-11-29 16:03:04.325 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:06 compute-0 nova_compute[189485]: 2025-11-29 16:03:06.073 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:09 compute-0 nova_compute[189485]: 2025-11-29 16:03:09.327 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:09 compute-0 podman[257777]: 2025-11-29 16:03:09.887117997 +0000 UTC m=+0.088130631 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 16:03:11 compute-0 nova_compute[189485]: 2025-11-29 16:03:11.075 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:14 compute-0 nova_compute[189485]: 2025-11-29 16:03:14.329 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:16 compute-0 nova_compute[189485]: 2025-11-29 16:03:16.078 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:19 compute-0 nova_compute[189485]: 2025-11-29 16:03:19.334 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:21 compute-0 nova_compute[189485]: 2025-11-29 16:03:21.080 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:21 compute-0 podman[257803]: 2025-11-29 16:03:21.67792385 +0000 UTC m=+0.111144351 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:03:21 compute-0 podman[257802]: 2025-11-29 16:03:21.682364749 +0000 UTC m=+0.117152542 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 16:03:21 compute-0 podman[257804]: 2025-11-29 16:03:21.700366423 +0000 UTC m=+0.125263310 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 16:03:21 compute-0 podman[257801]: 2025-11-29 16:03:21.704769181 +0000 UTC m=+0.131866657 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release=1214.1726694543, container_name=kepler, name=ubi9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 16:03:21 compute-0 podman[257812]: 2025-11-29 16:03:21.72518891 +0000 UTC m=+0.118813155 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 16:03:21 compute-0 podman[257822]: 2025-11-29 16:03:21.732533768 +0000 UTC m=+0.128206479 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Nov 29 16:03:24 compute-0 nova_compute[189485]: 2025-11-29 16:03:24.337 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:26 compute-0 nova_compute[189485]: 2025-11-29 16:03:26.085 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:26 compute-0 podman[257920]: 2025-11-29 16:03:26.661082282 +0000 UTC m=+0.115128528 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:03:28 compute-0 podman[257939]: 2025-11-29 16:03:28.679475391 +0000 UTC m=+0.117844720 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.340 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.520 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.560 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.560 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.561 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.561 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.686 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:03:29 compute-0 podman[203677]: time="2025-11-29T16:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:03:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:03:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.823 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.137s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.825 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.904 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.915 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.979 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:03:29 compute-0 nova_compute[189485]: 2025-11-29 16:03:29.981 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.043 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.405 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.406 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4877MB free_disk=72.24908447265625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.407 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.407 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.491 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.491 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.492 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.492 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.564 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.583 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.586 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:03:30 compute-0 nova_compute[189485]: 2025-11-29 16:03:30.586 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.179s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:03:31 compute-0 nova_compute[189485]: 2025-11-29 16:03:31.088 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:31 compute-0 openstack_network_exporter[205841]: ERROR   16:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:03:31 compute-0 openstack_network_exporter[205841]: ERROR   16:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:03:31 compute-0 openstack_network_exporter[205841]: ERROR   16:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:03:31 compute-0 openstack_network_exporter[205841]: ERROR   16:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:03:31 compute-0 openstack_network_exporter[205841]: ERROR   16:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:03:31 compute-0 nova_compute[189485]: 2025-11-29 16:03:31.550 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:03:31 compute-0 nova_compute[189485]: 2025-11-29 16:03:31.550 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 16:03:32 compute-0 nova_compute[189485]: 2025-11-29 16:03:32.241 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 16:03:32 compute-0 nova_compute[189485]: 2025-11-29 16:03:32.242 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 16:03:32 compute-0 nova_compute[189485]: 2025-11-29 16:03:32.242 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 16:03:33 compute-0 nova_compute[189485]: 2025-11-29 16:03:33.849 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updating instance_info_cache with network_info: [{"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 16:03:33 compute-0 nova_compute[189485]: 2025-11-29 16:03:33.947 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-a1c56ffa-6d1c-408c-8667-517745513fd0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Nov 29 16:03:33 compute-0 nova_compute[189485]: 2025-11-29 16:03:33.948 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Nov 29 16:03:33 compute-0 nova_compute[189485]: 2025-11-29 16:03:33.949 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:33 compute-0 nova_compute[189485]: 2025-11-29 16:03:33.949 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:33 compute-0 nova_compute[189485]: 2025-11-29 16:03:33.949 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:34 compute-0 nova_compute[189485]: 2025-11-29 16:03:34.344 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:34 compute-0 nova_compute[189485]: 2025-11-29 16:03:34.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:34 compute-0 nova_compute[189485]: 2025-11-29 16:03:34.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:35 compute-0 nova_compute[189485]: 2025-11-29 16:03:35.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:36 compute-0 nova_compute[189485]: 2025-11-29 16:03:36.092 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:38 compute-0 nova_compute[189485]: 2025-11-29 16:03:38.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:38 compute-0 nova_compute[189485]: 2025-11-29 16:03:38.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
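The ComputeManager tasks above are all dispatched by oslo.service's periodic-task machinery referenced in each line (oslo_service/periodic_task.py:210). A minimal sketch of how such a task is declared and run; the manager and task names here are illustrative, not Nova's actual code:

from oslo_config import cfg
from oslo_service import periodic_task

CONF = cfg.CONF

class DemoManager(periodic_task.PeriodicTasks):
    def __init__(self):
        super().__init__(CONF)

    @periodic_task.periodic_task(spacing=60, run_immediately=True)
    def _poll_something(self, context):
        # Tasks commonly short-circuit on config, the way
        # _reclaim_queued_deletes does above when
        # CONF.reclaim_instance_interval <= 0.
        print("periodic task ran")

manager = DemoManager()
# oslo.service calls run_periodic_tasks() on a timer loop; each due task
# produces a "Running periodic task ..." debug line like those above.
manager.run_periodic_tasks(context=None)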
Nov 29 16:03:39 compute-0 nova_compute[189485]: 2025-11-29 16:03:39.349 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:40 compute-0 podman[257974]: 2025-11-29 16:03:40.684321286 +0000 UTC m=+0.113596376 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
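The config_data dict in these health_status lines is the declarative container definition that edpm_ansible manages. Purely as an illustration (this sketch is not the edpm_ansible module), the keys map onto podman run flags roughly like this:

# Hypothetical translation of a config_data dict (abbreviated from the
# podman_exporter entry above) into a `podman run` argument vector.
config_data = {
    "image": "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
    "restart": "always",
    "user": "root",
    "privileged": True,
    "ports": ["9882:9882"],
    "net": "host",
    "environment": {"CONTAINER_HOST": "unix:///run/podman/podman.sock"},
    "volumes": ["/run/podman/podman.sock:/run/podman/podman.sock:rw,z"],
}

argv = ["podman", "run", "-d", "--name", "podman_exporter",
        "--restart", config_data["restart"],
        "--user", config_data["user"],
        "--net", config_data["net"]]
if config_data.get("privileged"):
    argv.append("--privileged")
for port in config_data["ports"]:
    argv += ["-p", port]
for key, value in config_data["environment"].items():
    argv += ["-e", f"{key}={value}"]
for volume in config_data["volumes"]:
    argv += ["-v", volume]
argv.append(config_data["image"])
print(" ".join(argv))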
Nov 29 16:03:41 compute-0 nova_compute[189485]: 2025-11-29 16:03:41.095 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:44 compute-0 nova_compute[189485]: 2025-11-29 16:03:44.354 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:46 compute-0 nova_compute[189485]: 2025-11-29 16:03:46.099 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:49 compute-0 nova_compute[189485]: 2025-11-29 16:03:49.358 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:51 compute-0 nova_compute[189485]: 2025-11-29 16:03:51.101 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
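The recurring "[POLLIN] on fd 26" lines come from the python-ovs Poller that ovsdbapp's IDL blocks on between OVSDB updates; __log_wakeup is its internal debug hook. A minimal sketch of that wait loop, assuming the python-ovs package is installed and using a pipe in place of the OVSDB connection's socket:

import os
from ovs import poller

r, w = os.pipe()
os.write(w, b"wake")            # make the read end readable

p = poller.Poller()
p.fd_wait(r, poller.POLLIN)     # wait for fd r to become readable
p.block()                       # returns once POLLIN fires; at debug level
                                # ovs logs the wakeup, as in the lines above
print("woke up on fd", r, "->", os.read(r, 4))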
Nov 29 16:03:51 compute-0 nova_compute[189485]: 2025-11-29 16:03:51.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:03:52 compute-0 podman[258000]: 2025-11-29 16:03:52.65269982 +0000 UTC m=+0.088166072 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:03:52 compute-0 podman[258001]: 2025-11-29 16:03:52.675303458 +0000 UTC m=+0.107286625 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 16:03:52 compute-0 podman[258002]: 2025-11-29 16:03:52.676842109 +0000 UTC m=+0.104885420 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
Nov 29 16:03:52 compute-0 podman[257999]: 2025-11-29 16:03:52.682997866 +0000 UTC m=+0.122387973 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, release=1214.1726694543, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, version=9.4)
Nov 29 16:03:52 compute-0 podman[258004]: 2025-11-29 16:03:52.695785699 +0000 UTC m=+0.114183502 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, version=9.6, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 29 16:03:52 compute-0 podman[258003]: 2025-11-29 16:03:52.712242742 +0000 UTC m=+0.138362902 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Nov 29 16:03:54 compute-0 nova_compute[189485]: 2025-11-29 16:03:54.361 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:56 compute-0 nova_compute[189485]: 2025-11-29 16:03:56.102 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:57 compute-0 podman[258110]: 2025-11-29 16:03:57.689179255 +0000 UTC m=+0.128965709 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd)
Nov 29 16:03:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:03:59.224 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:03:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:03:59.225 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:03:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:03:59.226 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:03:59 compute-0 nova_compute[189485]: 2025-11-29 16:03:59.366 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:03:59 compute-0 podman[258129]: 2025-11-29 16:03:59.647744834 +0000 UTC m=+0.104977554 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:03:59 compute-0 podman[203677]: time="2025-11-29T16:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:03:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:03:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.065 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] exceeds the number of worker threads available to execute them. Polling can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.065 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.065 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
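The warning at 16:04:01.065 means the polling task has more pollsters than ThreadPoolExecutor workers, so pollsters queue behind one another. A generic sketch (not ceilometer code) of that serialization with a single worker:

import time
from concurrent.futures import ThreadPoolExecutor

def poll(name):
    time.sleep(0.1)             # stand-in for one pollster's work
    return name

start = time.monotonic()
with ThreadPoolExecutor(max_workers=1) as executor:   # 1 worker, 5 pollsters
    results = list(executor.map(poll, [f"pollster-{i}" for i in range(5)]))
# With one worker the five 0.1s pollsters take ~0.5s: fully serialized.
print(results, f"{time.monotonic() - start:.2f}s")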
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.082 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.074 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'name': 'te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.089 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'a1c56ffa-6d1c-408c-8667-517745513fd0', 'name': 'te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo', 'flavor': {'id': 'cde1daa0-956a-446c-a1eb-2046e0cd1fa7', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': '276c0a04-08bd-40bb-ad7b-a0be69fa4466'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'user_id': '997fde32c4f7472e87493536b60e7b64', 'hostId': 'ac36d33345ade693b829abb2bca40a4477a3393e803c609f4b25701a', 'status': 'active', 'metadata': {'metering.server_group': '4838e190-17b5-46fc-b5c5-64e289c1eccb'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.090 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.090 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.090 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.091 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-11-29T16:04:01.090717) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.098 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.104 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.105 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Nov 29 16:04:01 compute-0 nova_compute[189485]: 2025-11-29 16:04:01.105 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.105 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.106 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.106 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.106 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.106 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.106 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.107 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-11-29T16:04:01.106639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.108 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
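The .delta samples above read 0 because a delta pollster reports the change in a cumulative counter (2250 outgoing bytes at this poll) since the previous cycle. A generic sketch of that bookkeeping, assuming per-resource state kept between polls:

previous = {}   # hypothetical per-resource cache of the last cumulative value

def delta_sample(instance_id, cumulative_bytes):
    prev = previous.get(instance_id)
    previous[instance_id] = cumulative_bytes
    # No baseline on the first reading; afterwards an unchanged counter
    # yields 0, as in the samples logged above.
    return 0 if prev is None else cumulative_bytes - prev

print(delta_sample("a1c56ffa", 2250))   # 0 (first reading, no baseline)
print(delta_sample("a1c56ffa", 2250))   # 0 (counter unchanged)
print(delta_sample("a1c56ffa", 2900))   # 650 (traffic since last poll)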
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.109 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.109 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.109 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.109 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.109 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.110 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-11-29T16:04:01.109754) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.149 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/memory.usage volume: 42.37890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.183 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/memory.usage volume: 42.4140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
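memory.usage is reported in MB; measured against the 128 MB m1.nano flavor from the discovery data above, both guests sit at roughly one third of their allocation:

# Quick arithmetic on the two memory.usage samples above (MB, flavor ram=128).
for name, usage_mb in [("instance-0000000b", 42.37890625),
                       ("instance-0000000e", 42.4140625)]:
    print(f"{name}: {usage_mb:.1f} MB / 128 MB = {usage_mb / 128:.0%}")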
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.185 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.185 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.186 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.186 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes volume: 1976 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-11-29T16:04:01.185786) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.188 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.189 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.189 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.190 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-11-29T16:04:01.189418) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.192 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.192 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.192 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.192 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.193 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.193 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.194 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-11-29T16:04:01.192791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.195 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.195 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.195 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.195 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.195 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-11-29T16:04:01.195764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.273 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.274 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.332 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.332 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
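
[Editor's note] disk.device.read.bytes is a cumulative counter (bytes read since the instance started), emitted once per block device, which is why each instance above produces the same pair of values (30579200 and 299326, plausibly a root disk plus a small secondary device, though device names are not visible at this log level). Turning the counter into throughput takes two consecutive polls; a minimal sketch, where the second reading is invented for illustration:

    # Convert two cumulative byte counters into a rate. The follow-up reading
    # is hypothetical; real samples carry timestamps and device names in
    # their metadata.
    def bytes_per_second(prev_volume, prev_ts, cur_volume, cur_ts):
        delta = cur_volume - prev_volume
        if delta < 0:              # counter wrapped or reset (e.g. reboot)
            delta = cur_volume
        return delta / (cur_ts - prev_ts)

    print(bytes_per_second(30_579_200, 0.0, 30_710_272, 300.0))  # ~436.9 B/s
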
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.334 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.335 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.335 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.336 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-11-29T16:04:01.336018) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.336 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.337 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.338 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.339 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.340 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.341 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.341 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.342 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-11-29T16:04:01.341961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.342 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.359 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.359 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.376 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.376 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.376 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.377 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.377 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.377 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.377 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.377 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.377 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/cpu volume: 339360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.377 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/cpu volume: 335700000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.378 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
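
[Editor's note] The cpu meter is cumulative as well: the volume is total guest CPU time in nanoseconds, so 339360000000 above is about 339.4 s of CPU time consumed since boot. Average utilization over an interval is the CPU-time delta divided by wall-clock time and vCPU count. A sketch, with the second reading and the vCPU count assumed:

    # CPU utilisation between two cumulative cpu samples (nanoseconds).
    def cpu_util_percent(prev_ns, cur_ns, wall_seconds, vcpus=1):
        busy_seconds = (cur_ns - prev_ns) / 1e9
        return 100.0 * busy_seconds / (wall_seconds * vcpus)

    # 339360000000 ns from the log, plus a hypothetical reading 300 s later:
    print(cpu_util_percent(339_360_000_000, 339_660_000_000, 300.0))  # 0.1 (%)
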
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.378 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.378 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.378 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.378 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.378 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.378 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 569535603 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.379 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.latency volume: 64248485 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.379 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 667951938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-11-29T16:04:01.377438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.379 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-11-29T16:04:01.378723) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.379 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.latency volume: 71545186 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.380 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-11-29T16:04:01.380763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.381 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.381 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.381 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.381 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.381 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
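
[Editor's note] disk.device.read.latency is the cumulative time spent servicing reads (assumed to be nanoseconds, the unit libvirt's block statistics report), so dividing it by disk.device.read.requests for the same device gives a mean per-request service time. Using the first device of instance 2c879d1e-7499-4665-8880-438b30ff9d86 from this cycle:

    # Mean read service time from the cumulative counters logged above.
    total_read_latency_ns = 569_535_603   # disk.device.read.latency
    total_read_requests = 1_106           # disk.device.read.requests
    print(total_read_latency_ns / total_read_requests / 1e6, "ms/request")  # ~0.515
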
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.382 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.382 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.382 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.382 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.382 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.382 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.382 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-11-29T16:04:01.382378) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.382 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.383 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.383 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.383 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.383 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.383 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.383 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.384 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.384 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.384 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.384 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.384 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-11-29T16:04:01.384083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.384 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 73154560 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.385 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.385 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.385 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.385 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.385 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.385 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.385 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.386 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-11-29T16:04:01.385906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.386 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 8838861137 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.386 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.386 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 3824252546 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.386 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.387 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.387 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.387 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.387 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.387 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.387 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.387 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.387 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.388 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
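
[Editor's note] power.state volume 1 for both instances means the domains are running: the value is the hypervisor's domain-state code, and for libvirt the enumeration runs 0 through 7. A lookup table following the libvirt virDomainState numbering (treat the meter's own documentation as authoritative if it differs):

    # libvirt virDomainState codes; the volume 1 above decodes to "running".
    LIBVIRT_DOMAIN_STATE = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }
    print(LIBVIRT_DOMAIN_STATE[1])  # running
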
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.388 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.388 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-11-29T16:04:01.387501) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.388 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.388 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.388 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.388 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.389 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-11-29T16:04:01.388780) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.389 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 333 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.389 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.389 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 337 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.389 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-11-29T16:04:01.390415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.390 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.391 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.391 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.391 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.391 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.391 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.391 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.391 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-11-29T16:04:01.391687) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.392 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.392 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.392 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.392 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.392 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.392 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.392 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.392 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.393 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-11-29T16:04:01.392641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.393 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 30154752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.393 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.393 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
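
[Editor's note] Read together, the three sizing meters in this cycle describe one thin-provisioned device: disk.device.capacity is the virtual size (1073741824 B, exactly 1 GiB), while disk.device.usage and disk.device.allocation both measure the host-side footprint of the sparse image (they come from different hypervisor block-info fields, which is presumably why 30015488 and 30089216 differ only slightly). A quick check with the logged values for the root device:

    capacity = 1_073_741_824   # disk.device.capacity: 2**30 bytes, 1 GiB
    allocation = 30_089_216    # disk.device.allocation
    print(f"allocated {allocation / capacity:.1%} of the virtual size")  # 2.8%
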
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.394 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.394 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.394 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.394 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.394 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.394 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.394 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.395 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.395 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.395 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.395 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.395 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.395 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-11-29T16:04:01.394367) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.395 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.395 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.396 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.396 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.396 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-11-29T16:04:01.395570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.396 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.396 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.396 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.396 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.396 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.397 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-11-29T16:04:01.396529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.397 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.397 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.397 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.397 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.397 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.397 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.397 14 DEBUG ceilometer.compute.pollsters [-] 2c879d1e-7499-4665-8880-438b30ff9d86/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.398 14 DEBUG ceilometer.compute.pollsters [-] a1c56ffa-6d1c-408c-8667-517745513fd0/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.398 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.398 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-11-29T16:04:01.397768) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.398 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.399 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.400 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:04:01.401 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:04:01 compute-0 openstack_network_exporter[205841]: ERROR   16:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:04:01 compute-0 openstack_network_exporter[205841]: ERROR   16:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:04:01 compute-0 openstack_network_exporter[205841]: ERROR   16:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:04:01 compute-0 openstack_network_exporter[205841]: ERROR   16:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:04:01 compute-0 openstack_network_exporter[205841]: ERROR   16:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:04:04 compute-0 nova_compute[189485]: 2025-11-29 16:04:04.369 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:06 compute-0 nova_compute[189485]: 2025-11-29 16:04:06.108 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:09 compute-0 nova_compute[189485]: 2025-11-29 16:04:09.374 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:11 compute-0 nova_compute[189485]: 2025-11-29 16:04:11.108 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:11 compute-0 podman[258153]: 2025-11-29 16:04:11.616357752 +0000 UTC m=+0.069965843 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:04:14 compute-0 nova_compute[189485]: 2025-11-29 16:04:14.378 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:16 compute-0 nova_compute[189485]: 2025-11-29 16:04:16.112 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:19 compute-0 nova_compute[189485]: 2025-11-29 16:04:19.383 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:21 compute-0 nova_compute[189485]: 2025-11-29 16:04:21.116 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:23 compute-0 podman[258179]: 2025-11-29 16:04:23.698211977 +0000 UTC m=+0.124844438 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0)
Nov 29 16:04:23 compute-0 podman[258193]: 2025-11-29 16:04:23.713114688 +0000 UTC m=+0.116429502 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_id=edpm)
Nov 29 16:04:23 compute-0 podman[258178]: 2025-11-29 16:04:23.716322574 +0000 UTC m=+0.150035376 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, architecture=x86_64, io.openshift.expose-services=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Nov 29 16:04:23 compute-0 podman[258180]: 2025-11-29 16:04:23.725428179 +0000 UTC m=+0.142477983 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:04:23 compute-0 podman[258181]: 2025-11-29 16:04:23.726469838 +0000 UTC m=+0.133424630 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 16:04:23 compute-0 podman[258188]: 2025-11-29 16:04:23.73957914 +0000 UTC m=+0.144795136 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:04:24 compute-0 nova_compute[189485]: 2025-11-29 16:04:24.385 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:26 compute-0 nova_compute[189485]: 2025-11-29 16:04:26.120 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:28 compute-0 podman[258289]: 2025-11-29 16:04:28.722291989 +0000 UTC m=+0.161122083 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 16:04:29 compute-0 nova_compute[189485]: 2025-11-29 16:04:29.389 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:29 compute-0 podman[203677]: time="2025-11-29T16:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:04:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29524 "" "Go-http-client/1.1"
Nov 29 16:04:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4795 "" "Go-http-client/1.1"
Nov 29 16:04:30 compute-0 nova_compute[189485]: 2025-11-29 16:04:30.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:30 compute-0 podman[258307]: 2025-11-29 16:04:30.69945891 +0000 UTC m=+0.138005353 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 16:04:31 compute-0 nova_compute[189485]: 2025-11-29 16:04:31.123 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:31 compute-0 openstack_network_exporter[205841]: ERROR   16:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:04:31 compute-0 openstack_network_exporter[205841]: ERROR   16:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:04:31 compute-0 openstack_network_exporter[205841]: ERROR   16:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:04:31 compute-0 openstack_network_exporter[205841]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:04:31 compute-0 openstack_network_exporter[205841]: ERROR   16:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:04:31 compute-0 nova_compute[189485]: 2025-11-29 16:04:31.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:31 compute-0 nova_compute[189485]: 2025-11-29 16:04:31.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 16:04:31 compute-0 nova_compute[189485]: 2025-11-29 16:04:31.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 16:04:31 compute-0 nova_compute[189485]: 2025-11-29 16:04:31.763 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Nov 29 16:04:31 compute-0 nova_compute[189485]: 2025-11-29 16:04:31.764 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquired lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Nov 29 16:04:31 compute-0 nova_compute[189485]: 2025-11-29 16:04:31.765 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Nov 29 16:04:31 compute-0 nova_compute[189485]: 2025-11-29 16:04:31.765 189489 DEBUG nova.objects.instance [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.624 189489 DEBUG nova.network.neutron [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [{"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.644 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Releasing lock "refresh_cache-2c879d1e-7499-4665-8880-438b30ff9d86" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.644 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.645 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.646 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.684 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.685 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.686 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.686 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.770 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.879 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.109s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.880 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.942 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 16:04:33 compute-0 nova_compute[189485]: 2025-11-29 16:04:33.948 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.011 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.011 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.067 189489 DEBUG oslo_concurrency.processutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.392 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.452 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.454 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4880MB free_disk=72.24908447265625GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.454 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.454 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.585 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance 2c879d1e-7499-4665-8880-438b30ff9d86 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.585 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Instance a1c56ffa-6d1c-408c-8667-517745513fd0 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.586 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.586 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.682 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.704 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.706 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 16:04:34 compute-0 nova_compute[189485]: 2025-11-29 16:04:34.707 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.253s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:35 compute-0 nova_compute[189485]: 2025-11-29 16:04:35.545 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:35 compute-0 nova_compute[189485]: 2025-11-29 16:04:35.546 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:35 compute-0 nova_compute[189485]: 2025-11-29 16:04:35.547 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:36 compute-0 nova_compute[189485]: 2025-11-29 16:04:36.128 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:36 compute-0 nova_compute[189485]: 2025-11-29 16:04:36.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.127 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.128 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.129 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.129 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.130 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.132 189489 INFO nova.compute.manager [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Terminating instance#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.133 189489 DEBUG nova.compute.manager [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 16:04:37 compute-0 kernel: tap28ff21af-c2 (unregistering): left promiscuous mode
Nov 29 16:04:37 compute-0 NetworkManager[56360]: <info>  [1764432277.1866] device (tap28ff21af-c2): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.206 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:37 compute-0 ovn_controller[97827]: 2025-11-29T16:04:37Z|00180|binding|INFO|Releasing lport 28ff21af-c272-489e-85c2-27ab6ad320db from this chassis (sb_readonly=0)
Nov 29 16:04:37 compute-0 ovn_controller[97827]: 2025-11-29T16:04:37Z|00181|binding|INFO|Setting lport 28ff21af-c272-489e-85c2-27ab6ad320db down in Southbound
Nov 29 16:04:37 compute-0 ovn_controller[97827]: 2025-11-29T16:04:37Z|00182|binding|INFO|Removing iface tap28ff21af-c2 ovn-installed in OVS
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.215 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.220 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:82:93:16 10.100.3.44'], port_security=['fa:16:3e:82:93:16 10.100.3.44'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.3.44/16', 'neutron:device_id': '2c879d1e-7499-4665-8880-438b30ff9d86', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5e134a6-ec2b-4ce9-9b80-87ce5b922531', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=517fd69e-9ef0-4dda-87e3-69c54b736518, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=28ff21af-c272-489e-85c2-27ab6ad320db) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.222 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 28ff21af-c272-489e-85c2-27ab6ad320db in datapath 7871c73c-0a09-4317-aff1-d5a297fb41ee unbound from our chassis#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.224 106713 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 7871c73c-0a09-4317-aff1-d5a297fb41ee#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.247 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:37 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Nov 29 16:04:37 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d0000000b.scope: Consumed 7min 17.969s CPU time.
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.266 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[a5757849-54ae-4c7f-b783-fffd971d9485]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 16:04:37 compute-0 systemd-machined[155802]: Machine qemu-12-instance-0000000b terminated.
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.323 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[0e5c7701-e292-4761-88ae-acc986f3f448]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.328 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[3550b8d3-2851-42f3-859b-c63913b8229c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.366 239871 DEBUG oslo.privsep.daemon [-] privsep: reply[88f9bdc9-2263-4ac4-a6b0-7f13658dede4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.386 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[030c6818-c71b-4a39-b90b-5165576a31ce]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap7871c73c-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e8:cd:76'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 7, 'rx_bytes': 1960, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 38], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527242, 'reachable_time': 29368, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258360, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.407 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[9a985068-4bce-4628-a165-f63c51202c07]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tap7871c73c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527251, 'tstamp': 527251}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258366, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tap7871c73c-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 527254, 'tstamp': 527254}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258366, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
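
The two privsep replies above are raw netlink dumps taken inside the ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee namespace (the 'target' field in each message header): an RTM_NEWLINK record for the veth leg tap7871c73c-01, and RTM_NEWADDR records for 169.254.169.254 and 10.100.0.2. A minimal sketch of producing an equivalent dump with pyroute2, roughly what the agent's privileged helper wraps (requires root; the namespace name is taken from the log):

    # Sketch: link and IPv4 address dump inside the OVN metadata namespace,
    # mirroring the RTM_NEWLINK/RTM_NEWADDR replies logged above.
    from socket import AF_INET
    from pyroute2 import NetNS

    with NetNS('ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee') as ns:
        for link in ns.get_links():          # one RTM_NEWLINK message per device
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr(family=AF_INET):   # RTM_NEWADDR messages
            print(addr.get_attr('IFA_LABEL'),
                  addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])
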
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.408 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7871c73c-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.410 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.417 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.417 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7871c73c-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.418 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.418 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap7871c73c-00, col_values=(('external_ids', {'iface-id': '44ccce0e-f764-41d1-8796-ff08932a6de2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 16:04:37 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:37.418 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
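
The three one-command transactions above re-home the metadata tap: drop tap7871c73c-00 from br-ex, add it to br-int, then stamp the Interface row with the Neutron port's iface-id (the last two report "caused no change" because the port was already in the desired state). A hedged ovsdbapp sketch of the same sequence; the ovsdb-server endpoint is an assumption, while the port name and iface-id come from the log:

    # Sketch: Del/Add/DbSet against the local Open_vSwitch DB via ovsdbapp.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'   # assumed ovsdb-server socket
    conn = connection.Connection(
        idl=connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch'), timeout=10)
    api = impl_idl.OvsdbIdl(conn)

    # each call mirrors one "Running txn n=1 command(idx=0)" line above
    api.del_port('tap7871c73c-00', bridge='br-ex',
                 if_exists=True).execute(check_error=True)
    api.add_port('br-int', 'tap7871c73c-00',
                 may_exist=True).execute(check_error=True)
    api.db_set('Interface', 'tap7871c73c-00',
               ('external_ids',
                {'iface-id': '44ccce0e-f764-41d1-8796-ff08932a6de2'})
               ).execute(check_error=True)
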
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.431 189489 INFO nova.virt.libvirt.driver [-] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Instance destroyed successfully.#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.431 189489 DEBUG nova.objects.instance [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lazy-loading 'resources' on Instance uuid 2c879d1e-7499-4665-8880-438b30ff9d86 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.435 189489 DEBUG nova.compute.manager [req-a2ddc87d-d33d-4dde-9f9c-d088cfe45c75 req-1842344e-3d7f-4964-9bbb-74c4fda5eb18 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received event network-vif-unplugged-28ff21af-c272-489e-85c2-27ab6ad320db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.435 189489 DEBUG oslo_concurrency.lockutils [req-a2ddc87d-d33d-4dde-9f9c-d088cfe45c75 req-1842344e-3d7f-4964-9bbb-74c4fda5eb18 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.435 189489 DEBUG oslo_concurrency.lockutils [req-a2ddc87d-d33d-4dde-9f9c-d088cfe45c75 req-1842344e-3d7f-4964-9bbb-74c4fda5eb18 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.436 189489 DEBUG oslo_concurrency.lockutils [req-a2ddc87d-d33d-4dde-9f9c-d088cfe45c75 req-1842344e-3d7f-4964-9bbb-74c4fda5eb18 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.436 189489 DEBUG nova.compute.manager [req-a2ddc87d-d33d-4dde-9f9c-d088cfe45c75 req-1842344e-3d7f-4964-9bbb-74c4fda5eb18 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] No waiting events found dispatching network-vif-unplugged-28ff21af-c272-489e-85c2-27ab6ad320db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.436 189489 DEBUG nova.compute.manager [req-a2ddc87d-d33d-4dde-9f9c-d088cfe45c75 req-1842344e-3d7f-4964-9bbb-74c4fda5eb18 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received event network-vif-unplugged-28ff21af-c272-489e-85c2-27ab6ad320db for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
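
The Acquiring/acquired/released triple above is nova's per-instance event lock, named "<instance-uuid>-events", which serializes access to the external-event queue while pop_instance_event looks for a waiter. The mechanism is oslo.concurrency's named locks; a minimal sketch (lock name from the log; the queue is a hypothetical stand-in for InstanceEvents):

    # Sketch of the lockutils pattern behind the "-events" lock lines.
    from oslo_concurrency import lockutils

    INSTANCE_UUID = '2c879d1e-7499-4665-8880-438b30ff9d86'   # from the log

    def pop_instance_event(pending_events, event_name):
        # hypothetical stand-in for InstanceEvents.pop_instance_event
        with lockutils.lock(INSTANCE_UUID + '-events'):
            # None means "No waiting events found dispatching ..."
            return pending_events.pop(event_name, None)
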
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.451 189489 DEBUG nova.virt.libvirt.vif [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:51:47Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4649176-asg-evbjnyvcrawq-rkyrvun662rw-dja4nv6xx2xl',id=11,image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:51:57Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='4838e190-17b5-46fc-b5c5-64e289c1eccb'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb266773cd4c4eb0904e7249f2b6cb92',ramdisk_id='',reservation_id='r-ljx3hz30',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-739897620',owner_user_name='tempest-PrometheusGabbiTest-739897620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:51:57Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='997fde32c4f7472e87493536b60e7b64',uuid=2c879d1e-7499-4665-8880-438b30ff9d86,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.451 189489 DEBUG nova.network.os_vif_util [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converting VIF {"id": "28ff21af-c272-489e-85c2-27ab6ad320db", "address": "fa:16:3e:82:93:16", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.3.44", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap28ff21af-c2", "ovs_interfaceid": "28ff21af-c272-489e-85c2-27ab6ad320db", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.452 189489 DEBUG nova.network.os_vif_util [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:82:93:16,bridge_name='br-int',has_traffic_filtering=True,id=28ff21af-c272-489e-85c2-27ab6ad320db,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ff21af-c2') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.452 189489 DEBUG os_vif [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:93:16,bridge_name='br-int',has_traffic_filtering=True,id=28ff21af-c272-489e-85c2-27ab6ad320db,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ff21af-c2') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
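
The "Converting VIF", "Converted object" and "Unplugging vif" lines show nova translating the Neutron port dict into an os-vif VIFOpenVSwitch object and handing it to the 'ovs' plugin. A minimal os-vif sketch with the same field values (the InstanceInfo name is illustrative; error handling omitted):

    # Sketch: driving os-vif directly with the VIF that nova logs above.
    import os_vif
    from os_vif.objects import instance_info, vif

    os_vif.initialize()   # loads the installed plugins, including 'ovs'

    vif_obj = vif.VIFOpenVSwitch(
        id='28ff21af-c272-489e-85c2-27ab6ad320db',
        address='fa:16:3e:82:93:16',
        bridge_name='br-int',
        vif_name='tap28ff21af-c2',
        plugin='ovs')
    inst = instance_info.InstanceInfo(
        uuid='2c879d1e-7499-4665-8880-438b30ff9d86',
        name='instance-0000000b')   # name is an assumption, not from the log

    os_vif.unplug(vif_obj, inst)   # ends in the DelPortCommand seen below
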
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.454 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.454 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap28ff21af-c2, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.456 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.458 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.460 189489 INFO os_vif [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:82:93:16,bridge_name='br-int',has_traffic_filtering=True,id=28ff21af-c272-489e-85c2-27ab6ad320db,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap28ff21af-c2')#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.461 189489 INFO nova.virt.libvirt.driver [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Deleting instance files /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86_del#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.461 189489 INFO nova.virt.libvirt.driver [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Deletion of /var/lib/nova/instances/2c879d1e-7499-4665-8880-438b30ff9d86_del complete#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.525 189489 INFO nova.compute.manager [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Took 0.39 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.525 189489 DEBUG oslo.service.loopingcall [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.527 189489 DEBUG nova.compute.manager [-] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 16:04:37 compute-0 nova_compute[189485]: 2025-11-29 16:04:37.527 189489 DEBUG nova.network.neutron [-] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 16:04:38 compute-0 nova_compute[189485]: 2025-11-29 16:04:38.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:04:38 compute-0 nova_compute[189485]: 2025-11-29 16:04:38.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
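
_reclaim_queued_deletes returns early here because reclaim_instance_interval is 0 or negative, i.e. soft-delete reclaim is disabled and deletes take effect immediately. The knob is a plain oslo.config integer option on the compute service; a sketch of the shape (option name and semantics from the log, the definition below is illustrative rather than nova's exact one):

    # Sketch: the oslo.config option behind
    # "CONF.reclaim_instance_interval <= 0, skipping...".
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('reclaim_instance_interval', default=0,
                   help='Seconds between reclaims of soft-deleted instances; '
                        'values <= 0 disable the periodic reclaim task.'),
    ])

    if CONF.reclaim_instance_interval <= 0:
        print('skipping...')   # mirrors the manager's early return
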
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.573 189489 DEBUG nova.compute.manager [req-4c5fca9d-f28a-4edd-a1ac-f990903cc527 req-265901c7-60f7-4036-9878-11b128356275 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received event network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.574 189489 DEBUG oslo_concurrency.lockutils [req-4c5fca9d-f28a-4edd-a1ac-f990903cc527 req-265901c7-60f7-4036-9878-11b128356275 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.575 189489 DEBUG oslo_concurrency.lockutils [req-4c5fca9d-f28a-4edd-a1ac-f990903cc527 req-265901c7-60f7-4036-9878-11b128356275 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.576 189489 DEBUG oslo_concurrency.lockutils [req-4c5fca9d-f28a-4edd-a1ac-f990903cc527 req-265901c7-60f7-4036-9878-11b128356275 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.577 189489 DEBUG nova.compute.manager [req-4c5fca9d-f28a-4edd-a1ac-f990903cc527 req-265901c7-60f7-4036-9878-11b128356275 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] No waiting events found dispatching network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.578 189489 WARNING nova.compute.manager [req-4c5fca9d-f28a-4edd-a1ac-f990903cc527 req-265901c7-60f7-4036-9878-11b128356275 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received unexpected event network-vif-plugged-28ff21af-c272-489e-85c2-27ab6ad320db for instance with vm_state active and task_state deleting.#033[00m
Nov 29 16:04:40 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:40.615 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.616 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:40 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:40.617 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.668 189489 DEBUG nova.network.neutron [-] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.686 189489 INFO nova.compute.manager [-] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Took 3.16 seconds to deallocate network for instance.#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.728 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.730 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.849 189489 DEBUG nova.compute.provider_tree [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.874 189489 DEBUG nova.scheduler.client.report [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
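
The inventory dict above is what placement uses to derive schedulable capacity: per resource class, capacity = (total - reserved) * allocation_ratio. A worked check against the logged numbers (plain arithmetic, not placement code):

    # capacity = (total - reserved) * allocation_ratio, per resource class
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, cap)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
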
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.899 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:40 compute-0 nova_compute[189485]: 2025-11-29 16:04:40.932 189489 INFO nova.scheduler.client.report [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Deleted allocations for instance 2c879d1e-7499-4665-8880-438b30ff9d86#033[00m
Nov 29 16:04:41 compute-0 nova_compute[189485]: 2025-11-29 16:04:41.017 189489 DEBUG oslo_concurrency.lockutils [None req-28271ca1-36d7-4d44-860c-ff01c91de2a9 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "2c879d1e-7499-4665-8880-438b30ff9d86" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.889s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:41 compute-0 nova_compute[189485]: 2025-11-29 16:04:41.132 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:42 compute-0 nova_compute[189485]: 2025-11-29 16:04:42.457 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:42 compute-0 podman[258376]: 2025-11-29 16:04:42.702186786 +0000 UTC m=+0.135287379 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 16:04:42 compute-0 nova_compute[189485]: 2025-11-29 16:04:42.739 189489 DEBUG nova.compute.manager [req-b9773642-19cc-40a5-9b4a-241800ec3d9c req-d03f2518-29f1-44f9-9474-3014792ddfe8 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Received event network-vif-deleted-28ff21af-c272-489e-85c2-27ab6ad320db external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 16:04:46 compute-0 nova_compute[189485]: 2025-11-29 16:04:46.135 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.460 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.729 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.729 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.730 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.730 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.730 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.732 189489 INFO nova.compute.manager [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Terminating instance#033[00m
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.733 189489 DEBUG nova.compute.manager [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Nov 29 16:04:47 compute-0 kernel: tap05c6eb06-b3 (unregistering): left promiscuous mode
Nov 29 16:04:47 compute-0 NetworkManager[56360]: <info>  [1764432287.7722] device (tap05c6eb06-b3): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.784 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:47 compute-0 ovn_controller[97827]: 2025-11-29T16:04:47Z|00183|binding|INFO|Releasing lport 05c6eb06-b3ad-4a74-8b52-5aa37a365626 from this chassis (sb_readonly=0)
Nov 29 16:04:47 compute-0 ovn_controller[97827]: 2025-11-29T16:04:47Z|00184|binding|INFO|Setting lport 05c6eb06-b3ad-4a74-8b52-5aa37a365626 down in Southbound
Nov 29 16:04:47 compute-0 ovn_controller[97827]: 2025-11-29T16:04:47Z|00185|binding|INFO|Removing iface tap05c6eb06-b3 ovn-installed in OVS
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.791 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:47.795 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0e:87:f3 10.100.0.182'], port_security=['fa:16:3e:0e:87:f3 10.100.0.182'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.182/16', 'neutron:device_id': 'a1c56ffa-6d1c-408c-8667-517745513fd0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'cb266773cd4c4eb0904e7249f2b6cb92', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'b5e134a6-ec2b-4ce9-9b80-87ce5b922531', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=517fd69e-9ef0-4dda-87e3-69c54b736518, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>], logical_port=05c6eb06-b3ad-4a74-8b52-5aa37a365626) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fcffd90c6a0>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Nov 29 16:04:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:47.796 106713 INFO neutron.agent.ovn.metadata.agent [-] Port 05c6eb06-b3ad-4a74-8b52-5aa37a365626 in datapath 7871c73c-0a09-4317-aff1-d5a297fb41ee unbound from our chassis#033[00m
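
The "Matched UPDATE" line above is the agent's PortBindingUpdatedEvent firing on a Port_Binding row whose up/chassis columns just changed, which is how it learns the port left this chassis. Handlers like this are ovsdbapp row events; a minimal sketch of the shape (constructor arguments mirror the logged event; the match/run bodies are illustrative):

    # Sketch: an ovsdbapp RowEvent like the PortBindingUpdatedEvent above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

        def match_fn(self, event, row, old):
            # illustrative: only react when the chassis column changed
            return hasattr(old, 'chassis')

        def run(self, event, row, old):
            print('port %s binding changed' % row.logical_port)
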
Nov 29 16:04:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:47.796 106713 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 7871c73c-0a09-4317-aff1-d5a297fb41ee, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628#033[00m
Nov 29 16:04:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:47.798 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[061ba4e5-3cfb-42fb-a84d-03ce0112f8fd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 16:04:47 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:47.798 106713 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee namespace which is not needed anymore#033[00m
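
With no VIF ports left on network 7871c73c-0a09-4317-aff1-d5a297fb41ee, teardown means stopping the per-network haproxy side-car (the container lines further below) and deleting the ovnmeta- namespace itself. The namespace removal reduces to pyroute2's netns helpers; a minimal sketch (requires root; name from the log):

    # Sketch: final step of "Cleaning up ovnmeta-... namespace".
    from pyroute2 import netns

    NS = 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee'
    if NS in netns.listnetns():
        netns.remove(NS)
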
Nov 29 16:04:47 compute-0 nova_compute[189485]: 2025-11-29 16:04:47.827 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:47 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Nov 29 16:04:47 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 6min 53.201s CPU time.
Nov 29 16:04:47 compute-0 systemd-machined[155802]: Machine qemu-15-instance-0000000e terminated.
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.004 189489 INFO nova.virt.libvirt.driver [-] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Instance destroyed successfully.#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.006 189489 DEBUG nova.objects.instance [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lazy-loading 'resources' on Instance uuid a1c56ffa-6d1c-408c-8667-517745513fd0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Nov 29 16:04:48 compute-0 neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee[252797]: [NOTICE]   (252807) : haproxy version is 2.8.14-c23fe91
Nov 29 16:04:48 compute-0 neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee[252797]: [NOTICE]   (252807) : path to executable is /usr/sbin/haproxy
Nov 29 16:04:48 compute-0 neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee[252797]: [WARNING]  (252807) : Exiting Master process...
Nov 29 16:04:48 compute-0 neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee[252797]: [WARNING]  (252807) : Exiting Master process...
Nov 29 16:04:48 compute-0 neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee[252797]: [ALERT]    (252807) : Current worker (252809) exited with code 143 (Terminated)
Nov 29 16:04:48 compute-0 neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee[252797]: [WARNING]  (252807) : All workers exited. Exiting... (0)
Nov 29 16:04:48 compute-0 systemd[1]: libpod-2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7.scope: Deactivated successfully.
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.023 189489 DEBUG nova.virt.libvirt.vif [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-11-29T15:54:40Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-4649176-asg-evbjnyvcrawq-m4ghe4cradlm-4dergds4xuxo',id=14,image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-11-29T15:54:49Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='4838e190-17b5-46fc-b5c5-64e289c1eccb'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='cb266773cd4c4eb0904e7249f2b6cb92',ramdisk_id='',reservation_id='r-n6js5k2r',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='276c0a04-08bd-40bb-ad7b-a0be69fa4466',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-739897620',owner_user_name='tempest-PrometheusGabbiTest-739897620-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-11-29T15:54:49Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='997fde32c4f7472e87493536b60e7b64',uuid=a1c56ffa-6d1c-408c-8667-517745513fd0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.024 189489 DEBUG nova.network.os_vif_util [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converting VIF {"id": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "address": "fa:16:3e:0e:87:f3", "network": {"id": "7871c73c-0a09-4317-aff1-d5a297fb41ee", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.182", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "cb266773cd4c4eb0904e7249f2b6cb92", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap05c6eb06-b3", "ovs_interfaceid": "05c6eb06-b3ad-4a74-8b52-5aa37a365626", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.025 189489 DEBUG nova.network.os_vif_util [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:0e:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=05c6eb06-b3ad-4a74-8b52-5aa37a365626,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c6eb06-b3') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.025 189489 DEBUG os_vif [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=05c6eb06-b3ad-4a74-8b52-5aa37a365626,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c6eb06-b3') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.027 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.027 189489 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap05c6eb06-b3, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 16:04:48 compute-0 podman[258423]: 2025-11-29 16:04:48.027993493 +0000 UTC m=+0.082984772 container died 2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3)
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.029 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.032 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.038 189489 INFO os_vif [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:0e:87:f3,bridge_name='br-int',has_traffic_filtering=True,id=05c6eb06-b3ad-4a74-8b52-5aa37a365626,network=Network(7871c73c-0a09-4317-aff1-d5a297fb41ee),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap05c6eb06-b3')#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.040 189489 INFO nova.virt.libvirt.driver [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Deleting instance files /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0_del#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.041 189489 INFO nova.virt.libvirt.driver [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Deletion of /var/lib/nova/instances/a1c56ffa-6d1c-408c-8667-517745513fd0_del complete#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.064 189489 DEBUG nova.compute.manager [req-bed83e7d-edf6-418c-b3c8-1bbda5636c82 req-cc5a0afd-5d02-4f10-b2b1-e3c1e835cec5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received event network-vif-unplugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.065 189489 DEBUG oslo_concurrency.lockutils [req-bed83e7d-edf6-418c-b3c8-1bbda5636c82 req-cc5a0afd-5d02-4f10-b2b1-e3c1e835cec5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.066 189489 DEBUG oslo_concurrency.lockutils [req-bed83e7d-edf6-418c-b3c8-1bbda5636c82 req-cc5a0afd-5d02-4f10-b2b1-e3c1e835cec5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.066 189489 DEBUG oslo_concurrency.lockutils [req-bed83e7d-edf6-418c-b3c8-1bbda5636c82 req-cc5a0afd-5d02-4f10-b2b1-e3c1e835cec5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.067 189489 DEBUG nova.compute.manager [req-bed83e7d-edf6-418c-b3c8-1bbda5636c82 req-cc5a0afd-5d02-4f10-b2b1-e3c1e835cec5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] No waiting events found dispatching network-vif-unplugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Nov 29 16:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7-userdata-shm.mount: Deactivated successfully.
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.068 189489 DEBUG nova.compute.manager [req-bed83e7d-edf6-418c-b3c8-1bbda5636c82 req-cc5a0afd-5d02-4f10-b2b1-e3c1e835cec5 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received event network-vif-unplugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Nov 29 16:04:48 compute-0 systemd[1]: var-lib-containers-storage-overlay-93b3e9b9ca697d8b61296997940e032b66094a616806238f6283bf74cb18cde1-merged.mount: Deactivated successfully.
Nov 29 16:04:48 compute-0 podman[258423]: 2025-11-29 16:04:48.083161486 +0000 UTC m=+0.138152755 container cleanup 2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:04:48 compute-0 systemd[1]: libpod-conmon-2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7.scope: Deactivated successfully.
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.102 189489 INFO nova.compute.manager [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Took 0.37 seconds to destroy the instance on the hypervisor.#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.103 189489 DEBUG oslo.service.loopingcall [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.103 189489 DEBUG nova.compute.manager [-] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.103 189489 DEBUG nova.network.neutron [-] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Nov 29 16:04:48 compute-0 podman[258465]: 2025-11-29 16:04:48.160858606 +0000 UTC m=+0.050587601 container remove 2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.170 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[d5e2184b-29db-4c58-9438-e67472e4368f]: (4, ('Sat Nov 29 04:04:47 PM UTC 2025 Stopping container neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee (2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7)\n2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7\nSat Nov 29 04:04:48 PM UTC 2025 Deleting container neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee (2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7)\n2b0d95c9f6bde635ec6030cabf87dbdb3a12e203e95882230265acc552054cc7\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.172 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[5edd870e-8c61-44f0-b15a-9bae4c465458]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.173 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7871c73c-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.175 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:04:48 compute-0 kernel: tap7871c73c-00: left promiscuous mode
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.181 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.184 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[536ec59a-d4d0-454a-8364-adf1aa5b5159]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 16:04:48 compute-0 nova_compute[189485]: 2025-11-29 16:04:48.194 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.205 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[157ab3b6-1191-4cc7-894f-b7eccf3f4207]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.206 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[1ccef46e-c9de-40ae-be88-537175e5a762]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.223 239830 DEBUG oslo.privsep.daemon [-] privsep: reply[fe11e75a-82ba-449a-8997-f814d7a9a358]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 527234, 'reachable_time': 16867, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258479, 'error': None, 'target': 'ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.226 106819 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Nov 29 16:04:48 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:48.226 106819 DEBUG oslo.privsep.daemon [-] privsep: reply[8c56f545-0abb-45d9-b833-5691de942ded]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Nov 29 16:04:48 compute-0 systemd[1]: run-netns-ovnmeta\x2d7871c73c\x2d0a09\x2d4317\x2daff1\x2dd5a297fb41ee.mount: Deactivated successfully.
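The entries above trace the full teardown of the OVN metadata proxy for datapath 7871c73c: privsep stops and deletes the neutron-haproxy-ovnmeta container, an ovsdbapp transaction drops the tap port (DelPortCommand with if_exists=True, confirmed by the kernel's "left promiscuous mode"), the ovnmeta- network namespace is removed, and systemd reports its /run/netns bind mount deactivated. A minimal sketch of the equivalent manual cleanup, assuming standard podman/ovs-vsctl/iproute2 tooling and the names copied from this log:

    # Sketch only: manual equivalent of the agent-driven teardown logged above.
    # Container, port, and namespace names are taken from this log.
    import subprocess

    container = "neutron-haproxy-ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee"
    tap_port = "tap7871c73c-00"
    netns = "ovnmeta-7871c73c-0a09-4317-aff1-d5a297fb41ee"

    subprocess.run(["podman", "stop", container], check=False)  # stop the haproxy proxy
    subprocess.run(["podman", "rm", container], check=False)    # then delete it
    # Mirrors DelPortCommand(port=tap_port, if_exists=True)
    subprocess.run(["ovs-vsctl", "--if-exists", "del-port", tap_port], check=True)
    # Mirrors neutron.privileged.agent.linux.ip_lib.remove_netns
    subprocess.run(["ip", "netns", "delete", netns], check=True)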
Nov 29 16:04:50 compute-0 nova_compute[189485]: 2025-11-29 16:04:50.147 189489 DEBUG nova.compute.manager [req-fc4737d7-51f3-42a5-b64d-36122e04bbcb req-8b7aaf67-a381-47d3-aeac-66bf7ade8da0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received event network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 16:04:50 compute-0 nova_compute[189485]: 2025-11-29 16:04:50.147 189489 DEBUG oslo_concurrency.lockutils [req-fc4737d7-51f3-42a5-b64d-36122e04bbcb req-8b7aaf67-a381-47d3-aeac-66bf7ade8da0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Acquiring lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:04:50 compute-0 nova_compute[189485]: 2025-11-29 16:04:50.147 189489 DEBUG oslo_concurrency.lockutils [req-fc4737d7-51f3-42a5-b64d-36122e04bbcb req-8b7aaf67-a381-47d3-aeac-66bf7ade8da0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:04:50 compute-0 nova_compute[189485]: 2025-11-29 16:04:50.148 189489 DEBUG oslo_concurrency.lockutils [req-fc4737d7-51f3-42a5-b64d-36122e04bbcb req-8b7aaf67-a381-47d3-aeac-66bf7ade8da0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:04:50 compute-0 nova_compute[189485]: 2025-11-29 16:04:50.148 189489 DEBUG nova.compute.manager [req-fc4737d7-51f3-42a5-b64d-36122e04bbcb req-8b7aaf67-a381-47d3-aeac-66bf7ade8da0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] No waiting events found dispatching network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Nov 29 16:04:50 compute-0 nova_compute[189485]: 2025-11-29 16:04:50.148 189489 WARNING nova.compute.manager [req-fc4737d7-51f3-42a5-b64d-36122e04bbcb req-8b7aaf67-a381-47d3-aeac-66bf7ade8da0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received unexpected event network-vif-plugged-05c6eb06-b3ad-4a74-8b52-5aa37a365626 for instance with vm_state active and task_state deleting.
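The warning is benign here: neutron reported network-vif-plugged for port 05c6eb06 while the instance was already being torn down (vm_state active, task_state deleting), so no waiter was registered for the event. The surrounding lockutils lines show nova serializing event dispatch on a per-instance "<uuid>-events" lock. A sketch of that pattern with oslo.concurrency; the helper name and the waiter store are illustrative, not nova's actual code:

    # Sketch of the per-instance event lock pattern visible above.
    from oslo_concurrency import lockutils

    instance_uuid = "a1c56ffa-6d1c-408c-8667-517745513fd0"

    @lockutils.synchronized(instance_uuid + "-events")
    def pop_event(waiters, event_name):
        # Return and drop the registered waiter for this event, if any.
        # When nothing is waiting (as in the log), the caller emits the
        # "Received unexpected event" warning instead of dispatching.
        return waiters.pop(event_name, None)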
Nov 29 16:04:50 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:50.621 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.138 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.189 189489 DEBUG nova.network.neutron [-] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.214 189489 INFO nova.compute.manager [-] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Took 3.11 seconds to deallocate network for instance.
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.286 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.287 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.398 189489 DEBUG nova.compute.provider_tree [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.417 189489 DEBUG nova.scheduler.client.report [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
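Placement derives schedulable capacity from this inventory as (total - reserved) * allocation_ratio per resource class. Worked out for the values reported above:

    # Effective capacity implied by the inventory in the preceding line.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2

So this 8-core host can schedule up to 32 vCPUs, while disk is deliberately under-committed at a 0.9 ratio.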
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.443 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.157s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.482 189489 INFO nova.scheduler.client.report [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Deleted allocations for instance a1c56ffa-6d1c-408c-8667-517745513fd0
Nov 29 16:04:51 compute-0 nova_compute[189485]: 2025-11-29 16:04:51.562 189489 DEBUG oslo_concurrency.lockutils [None req-1b2a232d-7bee-436c-8517-9bc9fdf33fa3 997fde32c4f7472e87493536b60e7b64 cb266773cd4c4eb0904e7249f2b6cb92 - - default default] Lock "a1c56ffa-6d1c-408c-8667-517745513fd0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.833s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:04:52 compute-0 nova_compute[189485]: 2025-11-29 16:04:52.234 189489 DEBUG nova.compute.manager [req-b4650bb3-b01f-46d6-9c2f-1dac8fa71176 req-845323d5-d162-4cf7-9626-75cbedaafce0 909053e1d5334b56ba6acbca8f0aeaf5 c2f5879c6e094488bab1bf871b4654ee - - default default] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Received event network-vif-deleted-05c6eb06-b3ad-4a74-8b52-5aa37a365626 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Nov 29 16:04:52 compute-0 nova_compute[189485]: 2025-11-29 16:04:52.429 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764432277.4283695, 2c879d1e-7499-4665-8880-438b30ff9d86 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 16:04:52 compute-0 nova_compute[189485]: 2025-11-29 16:04:52.430 189489 INFO nova.compute.manager [-] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] VM Stopped (Lifecycle Event)
Nov 29 16:04:52 compute-0 nova_compute[189485]: 2025-11-29 16:04:52.460 189489 DEBUG nova.compute.manager [None req-aa82824a-6317-4973-82c3-61cc2c3384e2 - - - - - -] [instance: 2c879d1e-7499-4665-8880-438b30ff9d86] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 16:04:53 compute-0 nova_compute[189485]: 2025-11-29 16:04:53.031 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:04:54 compute-0 podman[258482]: 2025-11-29 16:04:54.677471529 +0000 UTC m=+0.113906104 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:04:54 compute-0 podman[258483]: 2025-11-29 16:04:54.680199552 +0000 UTC m=+0.113696828 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:04:54 compute-0 podman[258496]: 2025-11-29 16:04:54.682471964 +0000 UTC m=+0.095358096 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Nov 29 16:04:54 compute-0 podman[258481]: 2025-11-29 16:04:54.683113171 +0000 UTC m=+0.125814385 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, release=1214.1726694543, config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, name=ubi9, architecture=x86_64)
Nov 29 16:04:54 compute-0 podman[258484]: 2025-11-29 16:04:54.711299708 +0000 UTC m=+0.139228244 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 29 16:04:54 compute-0 podman[258490]: 2025-11-29 16:04:54.731041789 +0000 UTC m=+0.143630813 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:04:56 compute-0 nova_compute[189485]: 2025-11-29 16:04:56.141 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:04:58 compute-0 nova_compute[189485]: 2025-11-29 16:04:58.035 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:04:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:59.225 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:04:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:59.227 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:04:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:04:59.228 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:04:59 compute-0 podman[258593]: 2025-11-29 16:04:59.661251969 +0000 UTC m=+0.114305355 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:04:59 compute-0 podman[203677]: time="2025-11-29T16:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:04:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:04:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4329 "" "Go-http-client/1.1"
Nov 29 16:05:01 compute-0 nova_compute[189485]: 2025-11-29 16:05:01.144 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:01 compute-0 openstack_network_exporter[205841]: ERROR   16:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:05:01 compute-0 openstack_network_exporter[205841]: ERROR   16:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:05:01 compute-0 openstack_network_exporter[205841]: ERROR   16:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:05:01 compute-0 openstack_network_exporter[205841]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:05:01 compute-0 openstack_network_exporter[205841]: ERROR   16:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
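ovs-appctl-style clients locate a daemon through its control socket, conventionally <rundir>/<daemon>.<pid>.ctl, so these errors mean the exporter sees no such files in the OVS and OVN run directories it has mounted (see its volumes above); the ovn-northd errors in particular are expected on a compute node, where ovn-northd normally does not run. A sketch that probes for the sockets the exporter is looking for, assuming the default run directory paths:

    # Sketch: check for the control sockets behind the appctl errors above.
    import glob

    patterns = (
        "/var/run/openvswitch/ovsdb-server.*.ctl",
        "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "/var/run/ovn/ovn-northd.*.ctl",
    )
    for pattern in patterns:
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control socket found")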
Nov 29 16:05:01 compute-0 podman[258614]: 2025-11-29 16:05:01.669399094 +0000 UTC m=+0.109816365 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 16:05:03 compute-0 nova_compute[189485]: 2025-11-29 16:05:03.002 189489 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1764432288.0001228, a1c56ffa-6d1c-408c-8667-517745513fd0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Nov 29 16:05:03 compute-0 nova_compute[189485]: 2025-11-29 16:05:03.002 189489 INFO nova.compute.manager [-] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] VM Stopped (Lifecycle Event)
Nov 29 16:05:03 compute-0 nova_compute[189485]: 2025-11-29 16:05:03.029 189489 DEBUG nova.compute.manager [None req-4e18713e-c761-4ec5-8a16-95ed49e65168 - - - - - -] [instance: a1c56ffa-6d1c-408c-8667-517745513fd0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Nov 29 16:05:03 compute-0 nova_compute[189485]: 2025-11-29 16:05:03.039 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:04 compute-0 nova_compute[189485]: 2025-11-29 16:05:04.265 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:06 compute-0 nova_compute[189485]: 2025-11-29 16:05:06.146 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:08 compute-0 nova_compute[189485]: 2025-11-29 16:05:08.042 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:11 compute-0 nova_compute[189485]: 2025-11-29 16:05:11.151 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:13 compute-0 nova_compute[189485]: 2025-11-29 16:05:13.046 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:13 compute-0 podman[258642]: 2025-11-29 16:05:13.680259254 +0000 UTC m=+0.114418458 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:05:16 compute-0 nova_compute[189485]: 2025-11-29 16:05:16.155 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:18 compute-0 nova_compute[189485]: 2025-11-29 16:05:18.049 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:21 compute-0 nova_compute[189485]: 2025-11-29 16:05:21.156 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:23 compute-0 nova_compute[189485]: 2025-11-29 16:05:23.052 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:25 compute-0 podman[258673]: 2025-11-29 16:05:25.679624296 +0000 UTC m=+0.098205152 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 16:05:25 compute-0 podman[258667]: 2025-11-29 16:05:25.679985186 +0000 UTC m=+0.110451921 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:05:25 compute-0 podman[258665]: 2025-11-29 16:05:25.690176961 +0000 UTC m=+0.130331296 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, architecture=x86_64, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, release=1214.1726694543, name=ubi9, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 16:05:25 compute-0 podman[258666]: 2025-11-29 16:05:25.700523868 +0000 UTC m=+0.134174479 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 16:05:25 compute-0 podman[258692]: 2025-11-29 16:05:25.707843435 +0000 UTC m=+0.108002975 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, architecture=x86_64)
Nov 29 16:05:25 compute-0 podman[258687]: 2025-11-29 16:05:25.726163248 +0000 UTC m=+0.139660127 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 16:05:26 compute-0 nova_compute[189485]: 2025-11-29 16:05:26.159 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:28 compute-0 nova_compute[189485]: 2025-11-29 16:05:28.056 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:29 compute-0 podman[203677]: time="2025-11-29T16:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:05:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:05:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Nov 29 16:05:30 compute-0 podman[258783]: 2025-11-29 16:05:30.65500804 +0000 UTC m=+0.111491229 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3)
Nov 29 16:05:31 compute-0 nova_compute[189485]: 2025-11-29 16:05:31.162 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:31 compute-0 openstack_network_exporter[205841]: ERROR   16:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:05:31 compute-0 openstack_network_exporter[205841]: ERROR   16:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:05:31 compute-0 openstack_network_exporter[205841]: ERROR   16:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:05:31 compute-0 openstack_network_exporter[205841]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:05:31 compute-0 openstack_network_exporter[205841]: ERROR   16:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:05:31 compute-0 nova_compute[189485]: 2025-11-29 16:05:31.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:31 compute-0 nova_compute[189485]: 2025-11-29 16:05:31.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:05:31 compute-0 nova_compute[189485]: 2025-11-29 16:05:31.521 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 16:05:32 compute-0 nova_compute[189485]: 2025-11-29 16:05:32.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:32 compute-0 nova_compute[189485]: 2025-11-29 16:05:32.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:32 compute-0 podman[258802]: 2025-11-29 16:05:32.668132339 +0000 UTC m=+0.111057497 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
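The config_data dict embedded in these health_status records is the edpm_ansible container definition. As a rough illustration (not the actual edpm_ansible code), this is how such a dict maps onto a podman command line, using a trimmed copy of the node_exporter entry above:

    # Hypothetical translation of an edpm_ansible-style config_data dict into
    # `podman run` arguments; keys and values copied from the log entry above.
    config = {
        "image": "quay.io/prometheus/node-exporter:v1.5.0",
        "restart": "always",
        "user": "root",
        "privileged": True,
        "net": "host",
        "command": ["--web.config.file=/etc/node_exporter/node_exporter.yaml"],
        "volumes": [
            "/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z",
        ],
    }

    argv = ["podman", "run", "-d", "--name", "node_exporter",
            "--restart", config["restart"], "--user", config["user"],
            "--net", config["net"]]
    if config.get("privileged"):
        argv.append("--privileged")
    for volume in config.get("volumes", []):
        argv += ["-v", volume]   # 'ports' omitted: redundant with --net host
    argv += [config["image"], *config["command"]]
    print(" ".join(argv))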
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.059 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.533 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.534 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.534 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.534 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.870 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.872 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5360MB free_disk=72.30644226074219GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.872 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.872 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.934 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.934 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.953 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.965 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.983 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:05:33 compute-0 nova_compute[189485]: 2025-11-29 16:05:33.984 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
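The inventory record above is what the placement service uses to size this node: schedulable capacity per resource class is (total - reserved) * allocation_ratio. A worked example with the numbers from this cycle:

    # Capacity derivation from the inventory logged above:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g}")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2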
Nov 29 16:05:34 compute-0 nova_compute[189485]: 2025-11-29 16:05:34.983 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:35 compute-0 nova_compute[189485]: 2025-11-29 16:05:35.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:36 compute-0 nova_compute[189485]: 2025-11-29 16:05:36.166 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:36 compute-0 nova_compute[189485]: 2025-11-29 16:05:36.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:38 compute-0 nova_compute[189485]: 2025-11-29 16:05:38.063 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:39 compute-0 nova_compute[189485]: 2025-11-29 16:05:39.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:39 compute-0 nova_compute[189485]: 2025-11-29 16:05:39.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 16:05:41 compute-0 nova_compute[189485]: 2025-11-29 16:05:41.171 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:43 compute-0 nova_compute[189485]: 2025-11-29 16:05:43.066 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:44 compute-0 podman[258826]: 2025-11-29 16:05:44.695961367 +0000 UTC m=+0.126197125 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:05:46 compute-0 nova_compute[189485]: 2025-11-29 16:05:46.174 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:48 compute-0 nova_compute[189485]: 2025-11-29 16:05:48.070 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:48 compute-0 ovn_controller[97827]: 2025-11-29T16:05:48Z|00186|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Nov 29 16:05:51 compute-0 nova_compute[189485]: 2025-11-29 16:05:51.177 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:53 compute-0 nova_compute[189485]: 2025-11-29 16:05:53.075 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:53 compute-0 nova_compute[189485]: 2025-11-29 16:05:53.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:05:56 compute-0 nova_compute[189485]: 2025-11-29 16:05:56.179 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:56 compute-0 podman[258852]: 2025-11-29 16:05:56.639921437 +0000 UTC m=+0.076493677 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:05:56 compute-0 podman[258850]: 2025-11-29 16:05:56.663521492 +0000 UTC m=+0.113076691 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, container_name=kepler, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, distribution-scope=public, vcs-type=git, version=9.4, release-0.7.12=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release=1214.1726694543, io.openshift.tags=base rhel9, io.buildah.version=1.29.0)
Nov 29 16:05:56 compute-0 podman[258851]: 2025-11-29 16:05:56.667839138 +0000 UTC m=+0.103334339 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Nov 29 16:05:56 compute-0 podman[258860]: 2025-11-29 16:05:56.676251695 +0000 UTC m=+0.102256511 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=minimal rhel9, release=1755695350, io.buildah.version=1.33.7, name=ubi9-minimal, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 29 16:05:56 compute-0 podman[258853]: 2025-11-29 16:05:56.68055817 +0000 UTC m=+0.115928988 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Nov 29 16:05:56 compute-0 podman[258859]: 2025-11-29 16:05:56.705278855 +0000 UTC m=+0.130165661 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:05:58 compute-0 nova_compute[189485]: 2025-11-29 16:05:58.078 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:05:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:05:59.227 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:05:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:05:59.228 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:05:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:05:59.228 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:05:59 compute-0 podman[203677]: time="2025-11-29T16:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:05:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:05:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.065 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them, so the polling process can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.066 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.066 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.067 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.069 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f21940>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
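The run above is one polling cycle on a host with no workloads: before each pollster runs, the manager executes its configured discovery method (local_instances), the per-cycle discovery cache already holds {'local_instances': []}, and every pollster is therefore skipped. A minimal sketch of that discover-once, skip-if-empty pattern (hypothetical names, not the actual ceilometer API):

    # Hypothetical sketch of the per-cycle discovery cache seen in the log.
    def run_cycle(pollsters, discover_local_instances):
        discovery_cache = {}  # one entry per discovery method, reused all cycle
        for name in pollsters:
            if 'local_instances' not in discovery_cache:
                discovery_cache['local_instances'] = discover_local_instances()
            if not discovery_cache['local_instances']:
                print(f"Skip pollster {name}, no resources found this cycle")
                continue
            # real polling of the discovered resources would happen here

    run_cycle(['cpu', 'memory.usage'], lambda: [])  # no instances -> all skipped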
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:06:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:06:01.079 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
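Each pollster in the task is handed to the ThreadPoolExecutor named in the registration line, and the manager logs one completion message per pollster once its future resolves. A self-contained sketch of that fan-out, using only the standard library (illustrative, not ceilometer's own code):

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def poll(name):
        return name  # placeholder for the real polling work

    names = ['cpu', 'memory.usage', 'disk.device.read.bytes']
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(poll, n): n for n in names}
        for fut in as_completed(futures):
            print(f"Finished processing pollster [{futures[fut]}].")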
Nov 29 16:06:01 compute-0 nova_compute[189485]: 2025-11-29 16:06:01.182 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:01 compute-0 openstack_network_exporter[205841]: ERROR   16:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:06:01 compute-0 openstack_network_exporter[205841]: ERROR   16:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:06:01 compute-0 openstack_network_exporter[205841]: ERROR   16:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:06:01 compute-0 openstack_network_exporter[205841]: ERROR   16:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:06:01 compute-0 openstack_network_exporter[205841]: ERROR   16:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
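These exporter errors are ovs-appctl calls failing for lack of a control socket: ovsdb-server and ovn-northd expose no socket on this node, and the dpif-netdev commands require a userspace (DPDK) datapath, which is not configured here. One of the failing calls can be reproduced by hand; the command string is taken verbatim from the log, and ovs-appctl is assumed to be installed:

    import subprocess

    # Without -t, ovs-appctl targets ovs-vswitchd; this fails the same way
    # when no netdev datapath exists.
    result = subprocess.run(
        ["ovs-appctl", "dpif-netdev/pmd-rxq-show"],
        capture_output=True, text=True)
    print(result.returncode, result.stderr.strip())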
Nov 29 16:06:01 compute-0 podman[258963]: 2025-11-29 16:06:01.680876665 +0000 UTC m=+0.122434734 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
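The health_status records come from podman's periodic healthcheck: each container mounts /var/lib/openstack/healthchecks/<name> at /openstack and runs the /openstack/healthcheck test named in its config. The same check the timer runs can be invoked directly (assumes the multipathd container from this host exists):

    import subprocess

    # podman healthcheck run exits 0 for healthy, non-zero otherwise.
    rc = subprocess.run(["podman", "healthcheck", "run", "multipathd"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")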
Nov 29 16:06:03 compute-0 nova_compute[189485]: 2025-11-29 16:06:03.081 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:03 compute-0 podman[258985]: 2025-11-29 16:06:03.674359196 +0000 UTC m=+0.111301024 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
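node_exporter's systemd collector only tracks units matching the --collector.systemd.unit-include pattern in the config above; node_exporter anchors the pattern, which re.fullmatch approximates here. A quick check with illustrative unit names:

    import re

    # Pattern copied from --collector.systemd.unit-include above.
    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["ovsdb-server.service", "virtqemud.service",
                 "rsyslog.service", "sshd.service"]:
        print(unit, bool(pattern.fullmatch(unit)))
    # sshd.service is the only one of these that is excluded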
Nov 29 16:06:06 compute-0 nova_compute[189485]: 2025-11-29 16:06:06.185 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:08 compute-0 nova_compute[189485]: 2025-11-29 16:06:08.086 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:11 compute-0 nova_compute[189485]: 2025-11-29 16:06:11.187 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:13 compute-0 nova_compute[189485]: 2025-11-29 16:06:13.090 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:14 compute-0 podman[259006]: 2025-11-29 16:06:14.803267157 +0000 UTC m=+0.066962292 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:06:16 compute-0 nova_compute[189485]: 2025-11-29 16:06:16.190 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:17 compute-0 systemd-logind[794]: New session 31 of user zuul.
Nov 29 16:06:17 compute-0 systemd[1]: Started Session 31 of User zuul.
Nov 29 16:06:18 compute-0 nova_compute[189485]: 2025-11-29 16:06:18.094 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:21 compute-0 nova_compute[189485]: 2025-11-29 16:06:21.192 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:22 compute-0 ovs-vsctl[259201]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
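That ovs-vsctl error simply means other_config:dpdk-init was never set on this host (DPDK is not enabled), so whatever queried it got a missing-key failure. The equivalent query, assuming ovs-vsctl and a running ovsdb-server:

    import subprocess

    # Exits non-zero with the db_ctl_base error above when the key is unset.
    result = subprocess.run(
        ["ovs-vsctl", "get", "Open_vSwitch", ".", "other_config:dpdk-init"],
        capture_output=True, text=True)
    print(result.returncode, result.stderr.strip())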
Nov 29 16:06:23 compute-0 nova_compute[189485]: 2025-11-29 16:06:23.097 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:23 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 259057 (sos)
Nov 29 16:06:23 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Nov 29 16:06:23 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Nov 29 16:06:23 compute-0 virtqemud[189062]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 16:06:23 compute-0 virtqemud[189062]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 16:06:24 compute-0 virtqemud[189062]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
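With modular libvirt, virtqemud reaches the other drivers over per-daemon read-only sockets under /var/run/libvirt; virtnetworkd, virtnwfilterd and virtstoraged are not running on this node, hence the connection failures. A quick presence check, using the socket paths from the messages above:

    import os

    for sock in ("/var/run/libvirt/virtnetworkd-sock-ro",
                 "/var/run/libvirt/virtnwfilterd-sock-ro",
                 "/var/run/libvirt/virtstoraged-sock-ro"):
        print(sock, "present" if os.path.exists(sock) else "missing")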
Nov 29 16:06:26 compute-0 nova_compute[189485]: 2025-11-29 16:06:26.195 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:27 compute-0 podman[259720]: 2025-11-29 16:06:27.704977483 +0000 UTC m=+0.142994787 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Nov 29 16:06:27 compute-0 podman[259719]: 2025-11-29 16:06:27.719197506 +0000 UTC m=+0.140825019 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-container, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 16:06:27 compute-0 podman[259721]: 2025-11-29 16:06:27.721676252 +0000 UTC m=+0.139379329 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.3)
Nov 29 16:06:27 compute-0 podman[259732]: 2025-11-29 16:06:27.726960204 +0000 UTC m=+0.138453944 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.expose-services=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 16:06:27 compute-0 podman[259731]: 2025-11-29 16:06:27.732473772 +0000 UTC m=+0.151207157 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Nov 29 16:06:27 compute-0 podman[259726]: 2025-11-29 16:06:27.75169064 +0000 UTC m=+0.164383963 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 29 16:06:27 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 16:06:27 compute-0 systemd[1]: Started Hostname Service.
Nov 29 16:06:28 compute-0 nova_compute[189485]: 2025-11-29 16:06:28.100 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:29 compute-0 podman[203677]: time="2025-11-29T16:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:06:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:06:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
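Those GET lines are libpod REST calls arriving over the podman API socket (the CONTAINER_HOST unix:///run/podman/podman.sock seen in the podman_exporter config). A minimal client sketch for the same endpoint, assuming root access to the socket and the API version copied from the request line:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self._path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, resp.read()[:120])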
Nov 29 16:06:31 compute-0 nova_compute[189485]: 2025-11-29 16:06:31.197 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:31 compute-0 openstack_network_exporter[205841]: ERROR   16:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:06:31 compute-0 openstack_network_exporter[205841]: ERROR   16:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:06:31 compute-0 openstack_network_exporter[205841]: ERROR   16:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:06:31 compute-0 openstack_network_exporter[205841]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:06:31 compute-0 openstack_network_exporter[205841]: ERROR   16:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:06:32 compute-0 nova_compute[189485]: 2025-11-29 16:06:32.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:06:32 compute-0 nova_compute[189485]: 2025-11-29 16:06:32.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:06:32 compute-0 nova_compute[189485]: 2025-11-29 16:06:32.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:06:32 compute-0 nova_compute[189485]: 2025-11-29 16:06:32.514 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
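The _heal_instance_info_cache entries (and the other "Running periodic task" lines that follow) are oslo.service periodic tasks: methods on the compute manager decorated to run on a fixed interval, and here the heal pass finds no instances to refresh. A sketch of the wiring with an illustrative manager class, assuming oslo.service and oslo.config are installed:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _heal_instance_info_cache(self, context):
            print("Rebuilding the list of instances to heal")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)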
Nov 29 16:06:32 compute-0 podman[260258]: 2025-11-29 16:06:32.623975101 +0000 UTC m=+0.070361554 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, container_name=multipathd, org.label-schema.schema-version=1.0)
Nov 29 16:06:33 compute-0 nova_compute[189485]: 2025-11-29 16:06:33.102 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:33 compute-0 nova_compute[189485]: 2025-11-29 16:06:33.509 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:06:34 compute-0 nova_compute[189485]: 2025-11-29 16:06:34.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:06:34 compute-0 podman[260604]: 2025-11-29 16:06:34.651708333 +0000 UTC m=+0.091440170 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.524 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
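The Acquiring/acquired/released triple is oslo.concurrency's lock decorator guarding the resource tracker's shared state; the log even reports wait and hold times. The pattern, reduced to a sketch (function name borrowed from the log, body illustrative):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # everything in the body runs under the named lock

    clean_compute_node_cache()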
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.524 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.904 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.905 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5064MB free_disk=72.0465316772461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.905 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:06:35 compute-0 nova_compute[189485]: 2025-11-29 16:06:35.905 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:06:36 compute-0 nova_compute[189485]: 2025-11-29 16:06:36.055 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:06:36 compute-0 nova_compute[189485]: 2025-11-29 16:06:36.055 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:06:36 compute-0 nova_compute[189485]: 2025-11-29 16:06:36.086 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:06:36 compute-0 nova_compute[189485]: 2025-11-29 16:06:36.118 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
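Placement turns that inventory into schedulable capacity as (total - reserved) * allocation_ratio per resource class. Worked out with the numbers from the line above:

    # Capacity implied by the inventory reported to placement:
    # capacity = (total - reserved) * allocation_ratio
    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2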
Nov 29 16:06:36 compute-0 nova_compute[189485]: 2025-11-29 16:06:36.119 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:06:36 compute-0 nova_compute[189485]: 2025-11-29 16:06:36.119 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.214s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:06:36 compute-0 nova_compute[189485]: 2025-11-29 16:06:36.199 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:06:37 compute-0 nova_compute[189485]: 2025-11-29 16:06:37.119 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:06:37 compute-0 ovs-appctl[261119]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 16:06:37 compute-0 ovs-appctl[261128]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
Nov 29 16:06:37 compute-0 ovs-appctl[261134]: ovs|00001|daemon_unix|WARN|/var/run/openvswitch/ovs-monitor-ipsec.pid: open: No such file or directory
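ovs-appctl locates its target daemon by reading a pidfile in the Open vSwitch run directory and connecting to the matching control socket; the WARN above simply means ovs-monitor-ipsec is not running on this host, so its pidfile cannot be opened. A hedged sketch of guarding such a call (the pidfile path is taken from the log; the appctl invocation is illustrative):

    import os
    import subprocess

    PIDFILE = '/var/run/openvswitch/ovs-monitor-ipsec.pid'

    if os.path.exists(PIDFILE):
        # 'version' is a unixctl command every OVS daemon answers.
        subprocess.run(['ovs-appctl', '-t', 'ovs-monitor-ipsec', 'version'],
                       check=True)
    else:
        print('ovs-monitor-ipsec not running; skipping appctl call')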
Nov 29 16:06:37 compute-0 nova_compute[189485]: 2025-11-29 16:06:37.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:06:38 compute-0 nova_compute[189485]: 2025-11-29 16:06:38.105 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:40 compute-0 nova_compute[189485]: 2025-11-29 16:06:40.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:06:40 compute-0 nova_compute[189485]: 2025-11-29 16:06:40.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
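Both lines come from oslo.service's periodic task machinery: methods decorated with @periodic_task.periodic_task are invoked on a timer, and _reclaim_queued_deletes returns immediately because reclaim_instance_interval (the delay before soft-deleted instances are purged) is 0 on this node. A simplified sketch of the pattern, assuming oslo.service and oslo.config (config wiring trimmed down):

    from oslo_config import cfg
    from oslo_service import periodic_task

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('reclaim_instance_interval', default=0)])

    class ComputeManager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(CONF)

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            if CONF.reclaim_instance_interval <= 0:
                # Matches the "skipping..." debug line above.
                return
            # ...otherwise purge instances soft-deleted longer ago
            # than the configured interval.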
Nov 29 16:06:41 compute-0 nova_compute[189485]: 2025-11-29 16:06:41.201 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:43 compute-0 nova_compute[189485]: 2025-11-29 16:06:43.107 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:44 compute-0 podman[262101]: 2025-11-29 16:06:44.932894338 +0000 UTC m=+0.076688214 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:06:45 compute-0 virtqemud[189062]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
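Since libvirt split its monolithic libvirtd into modular daemons, each driver (virtqemud for QEMU, virtstoraged for storage, and so on) serves its own read-write and read-only sockets under /var/run/libvirt; the error above is virtqemud failing to reach the storage daemon's read-only socket because virtstoraged (or its systemd socket unit) is not active. A quick existence check, as a sketch (socket names follow the convention visible in the log):

    import os

    for sock in ('/var/run/libvirt/virtstoraged-sock',
                 '/var/run/libvirt/virtstoraged-sock-ro'):
        print(sock, 'present' if os.path.exists(sock) else 'missing')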
Nov 29 16:06:46 compute-0 nova_compute[189485]: 2025-11-29 16:06:46.204 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:47 compute-0 systemd[1]: Starting Time & Date Service...
Nov 29 16:06:47 compute-0 systemd[1]: Started Time & Date Service.
Nov 29 16:06:48 compute-0 nova_compute[189485]: 2025-11-29 16:06:48.110 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:51 compute-0 nova_compute[189485]: 2025-11-29 16:06:51.206 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:53 compute-0 nova_compute[189485]: 2025-11-29 16:06:53.115 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:56 compute-0 nova_compute[189485]: 2025-11-29 16:06:56.209 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:58 compute-0 nova_compute[189485]: 2025-11-29 16:06:58.117 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:06:58 compute-0 podman[262561]: 2025-11-29 16:06:58.674292797 +0000 UTC m=+0.099029364 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 16:06:58 compute-0 podman[262559]: 2025-11-29 16:06:58.703269907 +0000 UTC m=+0.122774644 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:06:58 compute-0 podman[262562]: 2025-11-29 16:06:58.709664318 +0000 UTC m=+0.110620546 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Nov 29 16:06:58 compute-0 podman[262560]: 2025-11-29 16:06:58.722250076 +0000 UTC m=+0.149929783 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:06:58 compute-0 podman[262579]: 2025-11-29 16:06:58.736925272 +0000 UTC m=+0.128298342 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git)
Nov 29 16:06:58 compute-0 podman[262569]: 2025-11-29 16:06:58.757776922 +0000 UTC m=+0.148208316 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
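Every health_status entry in this burst is podman running the container's configured healthcheck (the 'test' command in each config_data above, e.g. '/openstack/healthcheck') and recording the result plus health_failing_streak. The same check can be triggered by hand; a sketch driving it from Python via podman's CLI (container name taken from the log):

    import subprocess

    # 'podman healthcheck run' executes the container's configured test
    # command and exits 0 when healthy -- the check behind the
    # health_status=healthy entries above.
    result = subprocess.run(['podman', 'healthcheck', 'run', 'ovn_controller'])
    print('healthy' if result.returncode == 0 else 'unhealthy')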
Nov 29 16:06:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:06:59.229 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:06:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:06:59.229 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:06:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:06:59.230 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:06:59 compute-0 podman[203677]: time="2025-11-29T16:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:06:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:06:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
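These GET lines are podman's API service (the podman[203677] process) answering the prometheus-podman-exporter, which polls the libpod REST API over the unix socket configured as CONTAINER_HOST in the exporter's config_data above. The standard library is enough to issue the same query; a self-contained sketch (requires read access to /run/podman/podman.sock):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over AF_UNIX -- podman's API has no TCP listener here."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())), 'containers')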
Nov 29 16:07:01 compute-0 nova_compute[189485]: 2025-11-29 16:07:01.216 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:01 compute-0 openstack_network_exporter[205841]: ERROR   16:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:07:01 compute-0 openstack_network_exporter[205841]: ERROR   16:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:07:01 compute-0 openstack_network_exporter[205841]: ERROR   16:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:07:01 compute-0 openstack_network_exporter[205841]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:07:01 compute-0 openstack_network_exporter[205841]: ERROR   16:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
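The exporter's appctl.go errors mean it scanned the run directory for unixctl control sockets (files named <daemon>.<pid>.ctl) and found none for ovsdb-server or ovn-northd; on a compute node ovn-northd never runs, so those two are expected noise. The pmd-rxq-show/pmd-perf-show failures are likewise benign here: they are userspace-datapath (DPDK) commands and this host has no netdev datapath. A sketch of the socket discovery (the glob follows OVS's socket naming; the rundir differs per daemon, e.g. OVN daemons use /var/run/ovn):

    import glob

    def find_ctl(daemon, rundir='/var/run/openvswitch'):
        # OVS/OVN daemons create unixctl sockets named <daemon>.<pid>.ctl
        matches = glob.glob(f'{rundir}/{daemon}.*.ctl')
        return matches[0] if matches else None

    for daemon in ('ovsdb-server', 'ovs-vswitchd', 'ovn-northd'):
        print(daemon, find_ctl(daemon) or 'no control socket found')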
Nov 29 16:07:03 compute-0 nova_compute[189485]: 2025-11-29 16:07:03.120 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:03 compute-0 podman[262675]: 2025-11-29 16:07:03.71218167 +0000 UTC m=+0.148966917 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 16:07:05 compute-0 podman[262693]: 2025-11-29 16:07:05.275521064 +0000 UTC m=+0.154114186 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:07:06 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Nov 29 16:07:06 compute-0 systemd[1]: session-31.scope: Consumed 1min 33.827s CPU time, 628.0M memory peak, read 228.1M from disk, written 36.8M to disk.
Nov 29 16:07:06 compute-0 systemd-logind[794]: Session 31 logged out. Waiting for processes to exit.
Nov 29 16:07:06 compute-0 systemd-logind[794]: Removed session 31.
Nov 29 16:07:06 compute-0 nova_compute[189485]: 2025-11-29 16:07:06.219 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:06 compute-0 systemd-logind[794]: New session 32 of user zuul.
Nov 29 16:07:06 compute-0 systemd[1]: Started Session 32 of User zuul.
Nov 29 16:07:06 compute-0 systemd[1]: session-32.scope: Deactivated successfully.
Nov 29 16:07:06 compute-0 systemd-logind[794]: Session 32 logged out. Waiting for processes to exit.
Nov 29 16:07:06 compute-0 systemd-logind[794]: Removed session 32.
Nov 29 16:07:06 compute-0 systemd-logind[794]: New session 33 of user zuul.
Nov 29 16:07:06 compute-0 systemd[1]: Started Session 33 of User zuul.
Nov 29 16:07:07 compute-0 systemd[1]: session-33.scope: Deactivated successfully.
Nov 29 16:07:07 compute-0 systemd-logind[794]: Session 33 logged out. Waiting for processes to exit.
Nov 29 16:07:07 compute-0 systemd-logind[794]: Removed session 33.
Nov 29 16:07:08 compute-0 nova_compute[189485]: 2025-11-29 16:07:08.125 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:11 compute-0 nova_compute[189485]: 2025-11-29 16:07:11.223 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:13 compute-0 nova_compute[189485]: 2025-11-29 16:07:13.130 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:15 compute-0 podman[262773]: 2025-11-29 16:07:15.651508577 +0000 UTC m=+0.097625026 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:07:16 compute-0 nova_compute[189485]: 2025-11-29 16:07:16.225 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:17 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Nov 29 16:07:17 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Nov 29 16:07:18 compute-0 nova_compute[189485]: 2025-11-29 16:07:18.134 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:21 compute-0 nova_compute[189485]: 2025-11-29 16:07:21.228 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:23 compute-0 nova_compute[189485]: 2025-11-29 16:07:23.138 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:26 compute-0 nova_compute[189485]: 2025-11-29 16:07:26.231 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:28 compute-0 nova_compute[189485]: 2025-11-29 16:07:28.142 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:29 compute-0 nova_compute[189485]: 2025-11-29 16:07:29.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:07:29 compute-0 nova_compute[189485]: 2025-11-29 16:07:29.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Nov 29 16:07:29 compute-0 nova_compute[189485]: 2025-11-29 16:07:29.509 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Nov 29 16:07:29 compute-0 podman[262803]: 2025-11-29 16:07:29.654337307 +0000 UTC m=+0.093889356 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Nov 29 16:07:29 compute-0 podman[262805]: 2025-11-29 16:07:29.673093302 +0000 UTC m=+0.106774342 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, version=9.6, architecture=x86_64, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Nov 29 16:07:29 compute-0 podman[262801]: 2025-11-29 16:07:29.691282161 +0000 UTC m=+0.120246585 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Nov 29 16:07:29 compute-0 podman[262802]: 2025-11-29 16:07:29.70239332 +0000 UTC m=+0.145404931 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3)
Nov 29 16:07:29 compute-0 podman[262800]: 2025-11-29 16:07:29.709493081 +0000 UTC m=+0.146623744 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, vendor=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, name=ubi9, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9)
Nov 29 16:07:29 compute-0 podman[262804]: 2025-11-29 16:07:29.712895992 +0000 UTC m=+0.145690369 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Nov 29 16:07:29 compute-0 podman[203677]: time="2025-11-29T16:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:07:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:07:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4323 "" "Go-http-client/1.1"
Nov 29 16:07:31 compute-0 nova_compute[189485]: 2025-11-29 16:07:31.234 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:31 compute-0 openstack_network_exporter[205841]: ERROR   16:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:07:31 compute-0 openstack_network_exporter[205841]: ERROR   16:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:07:31 compute-0 openstack_network_exporter[205841]: ERROR   16:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:07:31 compute-0 openstack_network_exporter[205841]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:07:31 compute-0 openstack_network_exporter[205841]: ERROR   16:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:07:33 compute-0 nova_compute[189485]: 2025-11-29 16:07:33.145 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:07:33 compute-0 nova_compute[189485]: 2025-11-29 16:07:33.510 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:07:33 compute-0 nova_compute[189485]: 2025-11-29 16:07:33.510 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 16:07:33 compute-0 nova_compute[189485]: 2025-11-29 16:07:33.511 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 16:07:33 compute-0 nova_compute[189485]: 2025-11-29 16:07:33.549 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 16:07:34 compute-0 nova_compute[189485]: 2025-11-29 16:07:34.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:07:34 compute-0 nova_compute[189485]: 2025-11-29 16:07:34.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:07:34 compute-0 podman[262914]: 2025-11-29 16:07:34.638037765 +0000 UTC m=+0.089487907 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.511 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.511 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.511 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.512 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 16:07:35 compute-0 podman[262932]: 2025-11-29 16:07:35.703379886 +0000 UTC m=+0.143927832 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.924 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
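nova's libvirt driver emits this warning when a single NUMA node spans more than one CPU socket (common on virtualized hosts such as this Nova-on-KVM node), because the `socket` PCI NUMA affinity policy needs an unambiguous NUMA-node-to-socket mapping. The detection can be approximated straight from sysfs; a simplified sketch (standard Linux sysfs paths, logic reduced from the driver):

    import pathlib

    # Count distinct physical package (socket) ids among the CPUs of each
    # NUMA node; more than one socket per node rules out `socket` affinity.
    node_dir = pathlib.Path('/sys/devices/system/node')
    for node in sorted(node_dir.glob('node[0-9]*')):
        packages = set()
        for cpu in node.glob('cpu[0-9]*'):
            pkg = cpu / 'topology' / 'physical_package_id'
            if pkg.exists():
                packages.add(pkg.read_text().strip())
        print(node.name, 'sockets:', len(packages))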
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.925 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5305MB free_disk=72.30617141723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.925 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:07:35 compute-0 nova_compute[189485]: 2025-11-29 16:07:35.925 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.238 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.238 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.243 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.400 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.525 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.526 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
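That inventory payload fixes what placement will let the scheduler consume from this node: for each resource class, allocatable capacity is (total - reserved) * allocation_ratio. Worked through for the numbers above:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }

    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {capacity:g} allocatable")
    # VCPU: 32   MEMORY_MB: 7167   DISK_GB: 70.2

So the 8-vCPU host advertises 32 schedulable vCPUs at the 4.0 overcommit ratio, while disk is effectively undercommitted at 0.9.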
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.553 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.574 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
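That trait dump mixes CPU-flag traits (HW_CPU_X86_*) with virt-driver capability traits (COMPUTE_*); splitting on prefix is often all that is needed to read it. A quick sketch over a trimmed sample of the list above:

    traits = ("HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,HW_CPU_X86_SSE,"
              "COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_AVX2,COMPUTE_NODE").split(",")

    groups = {}
    for trait in traits:
        key = "HW_CPU_X86" if trait.startswith("HW_CPU_X86_") else "COMPUTE"
        groups.setdefault(key, []).append(trait)

    for key, members in sorted(groups.items()):
        print(key, len(members))  # COMPUTE 3, HW_CPU_X86 3 on this sample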
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.604 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.633 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.635 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:07:36 compute-0 nova_compute[189485]: 2025-11-29 16:07:36.636 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.710s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:07:37 compute-0 nova_compute[189485]: 2025-11-29 16:07:37.637 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:07:37 compute-0 nova_compute[189485]: 2025-11-29 16:07:37.637 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:07:37 compute-0 nova_compute[189485]: 2025-11-29 16:07:37.638 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:07:37 compute-0 nova_compute[189485]: 2025-11-29 16:07:37.638 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:07:38 compute-0 nova_compute[189485]: 2025-11-29 16:07:38.149 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:40 compute-0 nova_compute[189485]: 2025-11-29 16:07:40.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:07:40 compute-0 nova_compute[189485]: 2025-11-29 16:07:40.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
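_reclaim_queued_deletes is the periodic task that really deletes SOFT_DELETED instances once reclaim_instance_interval has elapsed; with the default of 0 it short-circuits exactly as logged. The guard reduces to roughly this (a sketch of the documented behavior, not nova's code verbatim):

    reclaim_instance_interval = 0  # nova.conf default: deferred delete disabled

    def _reclaim_queued_deletes():
        if reclaim_instance_interval <= 0:
            print("CONF.reclaim_instance_interval <= 0, skipping...")
            return
        # Otherwise nova would look up SOFT_DELETED instances older than
        # the interval and reclaim (really delete) them.

    _reclaim_queued_deletes()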
Nov 29 16:07:41 compute-0 nova_compute[189485]: 2025-11-29 16:07:41.243 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:43 compute-0 nova_compute[189485]: 2025-11-29 16:07:43.154 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:46 compute-0 nova_compute[189485]: 2025-11-29 16:07:46.243 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:46 compute-0 podman[262955]: 2025-11-29 16:07:46.632938357 +0000 UTC m=+0.085802029 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
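Each health_status event echoes the config_data dict that edpm_ansible created the container from. As an illustration only, here is a hypothetical translator (covering a handful of keys, not the edpm_ansible implementation) showing how such a dict maps onto real podman flags:

    def to_podman_args(name, cfg):
        # Hypothetical helper: translate a subset of config_data keys
        # into a `podman run` command line.
        args = ["podman", "run", "--name", name]
        if cfg.get("net") == "host":
            args += ["--network", "host"]
        if cfg.get("privileged"):
            args += ["--privileged"]
        for port in cfg.get("ports", []):
            args += ["--publish", port]
        for vol in cfg.get("volumes", []):
            args += ["--volume", vol]
        args.append(cfg["image"])
        return args + list(cfg.get("command", []))

    cfg = {
        "image": "quay.io/navidys/prometheus-podman-exporter:v1.10.1",
        "net": "host",
        "privileged": True,
        "ports": ["9882:9882"],
        "volumes": ["/run/podman/podman.sock:/run/podman/podman.sock:rw,z"],
        "command": ["--web.config.file=/etc/podman_exporter/podman_exporter.yaml"],
    }
    print(" ".join(to_podman_args("podman_exporter", cfg)))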
Nov 29 16:07:48 compute-0 nova_compute[189485]: 2025-11-29 16:07:48.157 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:51 compute-0 nova_compute[189485]: 2025-11-29 16:07:51.246 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:53 compute-0 nova_compute[189485]: 2025-11-29 16:07:53.161 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:55 compute-0 nova_compute[189485]: 2025-11-29 16:07:55.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:07:56 compute-0 nova_compute[189485]: 2025-11-29 16:07:56.249 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:58 compute-0 nova_compute[189485]: 2025-11-29 16:07:58.163 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:07:58 compute-0 nova_compute[189485]: 2025-11-29 16:07:58.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:07:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:07:59.231 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:07:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:07:59.231 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:07:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:07:59.231 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:07:59 compute-0 podman[203677]: time="2025-11-29T16:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:07:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:07:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
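Those two GETs are the podman_exporter container (CONTAINER_HOST=unix:///run/podman/podman.sock in the config above) scraping the libpod REST API through the service socket. The same endpoint can be queried from Python with nothing but the standard library; a minimal sketch:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection that dials an AF_UNIX socket instead of TCP.
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(json.loads(resp.read())), "containers")

(Needs the same access to the podman socket that the exporter itself runs with.)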
Nov 29 16:08:00 compute-0 podman[262982]: 2025-11-29 16:08:00.636421608 +0000 UTC m=+0.080107685 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Nov 29 16:08:00 compute-0 podman[262991]: 2025-11-29 16:08:00.67293977 +0000 UTC m=+0.104487540 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 16:08:00 compute-0 podman[262980]: 2025-11-29 16:08:00.673476495 +0000 UTC m=+0.119420553 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=, container_name=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, io.buildah.version=1.29.0, distribution-scope=public, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, name=ubi9, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container)
Nov 29 16:08:00 compute-0 podman[262981]: 2025-11-29 16:08:00.675628163 +0000 UTC m=+0.109444395 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:08:00 compute-0 podman[262983]: 2025-11-29 16:08:00.680170605 +0000 UTC m=+0.106620709 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Nov 29 16:08:00 compute-0 podman[262984]: 2025-11-29 16:08:00.703535153 +0000 UTC m=+0.141264410 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.066 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them. Therefore, the polling process may take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.066 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
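With more pollsters than worker threads ([1] here), polling tasks simply queue and run serially, which is what the two lines above are warning about. The effect is easy to reproduce with concurrent.futures directly:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def pollster(name):
        time.sleep(0.1)  # stand-in for one poll cycle
        return name

    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as pool:  # one worker, as logged
        list(pool.map(pollster, [f"pollster-{i}" for i in range(8)]))
    print(f"8 tasks on 1 worker: {time.monotonic() - start:.1f}s")  # ~0.8s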
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.066 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.067 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.068 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.070 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.075 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.076 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.077 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:08:01.078 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:08:01 compute-0 nova_compute[189485]: 2025-11-29 16:08:01.252 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:01 compute-0 openstack_network_exporter[205841]: ERROR   16:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:08:01 compute-0 openstack_network_exporter[205841]: ERROR   16:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:08:01 compute-0 openstack_network_exporter[205841]: ERROR   16:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:08:01 compute-0 openstack_network_exporter[205841]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:08:01 compute-0 openstack_network_exporter[205841]: ERROR   16:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:08:03 compute-0 nova_compute[189485]: 2025-11-29 16:08:03.167 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:05 compute-0 podman[263095]: 2025-11-29 16:08:05.685542656 +0000 UTC m=+0.125066044 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 16:08:06 compute-0 nova_compute[189485]: 2025-11-29 16:08:06.255 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:06 compute-0 nova_compute[189485]: 2025-11-29 16:08:06.553 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:06 compute-0 nova_compute[189485]: 2025-11-29 16:08:06.554 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 16:08:06 compute-0 podman[263112]: 2025-11-29 16:08:06.664217785 +0000 UTC m=+0.105278882 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:08:08 compute-0 nova_compute[189485]: 2025-11-29 16:08:08.170 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:11 compute-0 nova_compute[189485]: 2025-11-29 16:08:11.257 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:13 compute-0 nova_compute[189485]: 2025-11-29 16:08:13.174 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:16 compute-0 nova_compute[189485]: 2025-11-29 16:08:16.259 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:17 compute-0 podman[263134]: 2025-11-29 16:08:17.68646632 +0000 UTC m=+0.127271964 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:08:18 compute-0 nova_compute[189485]: 2025-11-29 16:08:18.179 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:21 compute-0 nova_compute[189485]: 2025-11-29 16:08:21.263 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:23 compute-0 nova_compute[189485]: 2025-11-29 16:08:23.183 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:25 compute-0 nova_compute[189485]: 2025-11-29 16:08:25.238 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:26 compute-0 nova_compute[189485]: 2025-11-29 16:08:26.267 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:28 compute-0 nova_compute[189485]: 2025-11-29 16:08:28.189 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:29 compute-0 podman[203677]: time="2025-11-29T16:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:08:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:08:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4333 "" "Go-http-client/1.1"
Nov 29 16:08:31 compute-0 nova_compute[189485]: 2025-11-29 16:08:31.271 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:31 compute-0 openstack_network_exporter[205841]: ERROR   16:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:08:31 compute-0 openstack_network_exporter[205841]: ERROR   16:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:08:31 compute-0 openstack_network_exporter[205841]: ERROR   16:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:08:31 compute-0 openstack_network_exporter[205841]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:08:31 compute-0 openstack_network_exporter[205841]: ERROR   16:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:08:31 compute-0 podman[263159]: 2025-11-29 16:08:31.671901612 +0000 UTC m=+0.089515719 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Nov 29 16:08:31 compute-0 podman[263158]: 2025-11-29 16:08:31.693584395 +0000 UTC m=+0.114570752 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Nov 29 16:08:31 compute-0 podman[263160]: 2025-11-29 16:08:31.700547743 +0000 UTC m=+0.125475746 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 16:08:31 compute-0 podman[263177]: 2025-11-29 16:08:31.710051308 +0000 UTC m=+0.109822484 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, vcs-type=git, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, vendor=Red Hat, Inc.)
Nov 29 16:08:31 compute-0 podman[263157]: 2025-11-29 16:08:31.715072813 +0000 UTC m=+0.146470150 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_id=edpm, version=9.4, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, io.openshift.tags=base rhel9, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git)
Nov 29 16:08:31 compute-0 podman[263166]: 2025-11-29 16:08:31.780167243 +0000 UTC m=+0.186279220 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 16:08:33 compute-0 nova_compute[189485]: 2025-11-29 16:08:33.195 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:33 compute-0 nova_compute[189485]: 2025-11-29 16:08:33.510 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:33 compute-0 nova_compute[189485]: 2025-11-29 16:08:33.510 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:08:33 compute-0 nova_compute[189485]: 2025-11-29 16:08:33.510 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:08:33 compute-0 nova_compute[189485]: 2025-11-29 16:08:33.524 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 16:08:34 compute-0 nova_compute[189485]: 2025-11-29 16:08:34.493 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.516 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.517 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.517 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.517 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.844 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.845 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5336MB free_disk=72.30617141723633GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.846 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.846 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.940 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.940 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.971 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.988 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.989 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:08:35 compute-0 nova_compute[189485]: 2025-11-29 16:08:35.990 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:08:36 compute-0 nova_compute[189485]: 2025-11-29 16:08:36.282 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:36 compute-0 podman[263270]: 2025-11-29 16:08:36.619734272 +0000 UTC m=+0.073420245 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:08:36 compute-0 nova_compute[189485]: 2025-11-29 16:08:36.990 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:37 compute-0 nova_compute[189485]: 2025-11-29 16:08:37.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:37 compute-0 podman[263287]: 2025-11-29 16:08:37.674512497 +0000 UTC m=+0.117630612 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:08:38 compute-0 nova_compute[189485]: 2025-11-29 16:08:38.199 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:38 compute-0 nova_compute[189485]: 2025-11-29 16:08:38.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:38 compute-0 nova_compute[189485]: 2025-11-29 16:08:38.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:41 compute-0 nova_compute[189485]: 2025-11-29 16:08:41.277 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:41 compute-0 nova_compute[189485]: 2025-11-29 16:08:41.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:08:41 compute-0 nova_compute[189485]: 2025-11-29 16:08:41.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 16:08:43 compute-0 nova_compute[189485]: 2025-11-29 16:08:43.205 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:46 compute-0 nova_compute[189485]: 2025-11-29 16:08:46.281 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:48 compute-0 nova_compute[189485]: 2025-11-29 16:08:48.209 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:48 compute-0 podman[263310]: 2025-11-29 16:08:48.677492442 +0000 UTC m=+0.114584711 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:08:51 compute-0 nova_compute[189485]: 2025-11-29 16:08:51.284 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:53 compute-0 nova_compute[189485]: 2025-11-29 16:08:53.214 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:56 compute-0 nova_compute[189485]: 2025-11-29 16:08:56.286 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:58 compute-0 nova_compute[189485]: 2025-11-29 16:08:58.218 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:08:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:08:59.232 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:08:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:08:59.233 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:08:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:08:59.233 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:08:59 compute-0 podman[203677]: time="2025-11-29T16:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:08:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:08:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4330 "" "Go-http-client/1.1"
Nov 29 16:09:01 compute-0 nova_compute[189485]: 2025-11-29 16:09:01.289 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:01 compute-0 openstack_network_exporter[205841]: ERROR   16:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:09:01 compute-0 openstack_network_exporter[205841]: ERROR   16:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:09:01 compute-0 openstack_network_exporter[205841]: ERROR   16:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:09:01 compute-0 openstack_network_exporter[205841]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:09:01 compute-0 openstack_network_exporter[205841]: ERROR   16:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
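The exporter errors above come from appctl-style lookups: each OVS/OVN daemon is located through its <name>.<pid>.ctl control socket under its rundir, and on a compute node ovn-northd never runs, so that lookup is expected to fail. A sketch of the check, with rundir paths assumed from the exporter's volume list shown below:

```python
# Sketch of the lookup that fails above: ovs-appctl style tooling finds a
# daemon via its <name>.<pid>.ctl control socket under the rundir. On a
# compute node ovn-northd does not run, so its lookup failing is expected.
from glob import glob

def find_ctl(rundir: str, daemon: str) -> list[str]:
    # e.g. /run/openvswitch/ovs-vswitchd.189001.ctl
    return glob(f"{rundir}/{daemon}.*.ctl")

for rundir, daemon in [("/run/openvswitch", "ovsdb-server"),
                       ("/run/ovn", "ovn-northd")]:
    hits = find_ctl(rundir, daemon)
    print(daemon, "->", hits or "no control socket files found")
```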
Nov 29 16:09:02 compute-0 podman[263335]: 2025-11-29 16:09:02.651453194 +0000 UTC m=+0.087159213 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, config_id=edpm, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 16:09:02 compute-0 podman[263336]: 2025-11-29 16:09:02.679151429 +0000 UTC m=+0.105825116 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Nov 29 16:09:02 compute-0 podman[263333]: 2025-11-29 16:09:02.694981414 +0000 UTC m=+0.127668092 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, config_id=edpm, distribution-scope=public, version=9.4, container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-type=git)
Nov 29 16:09:02 compute-0 podman[263348]: 2025-11-29 16:09:02.706627226 +0000 UTC m=+0.110148860 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, distribution-scope=public, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, architecture=x86_64, release=1755695350, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc.)
Nov 29 16:09:02 compute-0 podman[263334]: 2025-11-29 16:09:02.717950482 +0000 UTC m=+0.142174933 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:09:02 compute-0 podman[263337]: 2025-11-29 16:09:02.723798528 +0000 UTC m=+0.133227601 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 16:09:03 compute-0 nova_compute[189485]: 2025-11-29 16:09:03.222 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:06 compute-0 nova_compute[189485]: 2025-11-29 16:09:06.292 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:07 compute-0 podman[263445]: 2025-11-29 16:09:07.668247911 +0000 UTC m=+0.108071085 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0)
Nov 29 16:09:07 compute-0 podman[263463]: 2025-11-29 16:09:07.801730877 +0000 UTC m=+0.082978610 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
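The --collector.systemd.unit-include value above is a regular expression that node_exporter anchors against the full unit name, so only the EDPM, Open vSwitch, libvirt, and rsyslog units are collected. A quick sketch of the filter with an illustrative unit list; fullmatch emulates the exporter's anchoring:

```python
# Quick check of the systemd unit filter above. node_exporter anchors the
# regexp against the whole unit name; Python's fullmatch emulates that.
import re

unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ["edpm_nova_compute.service", "ovs-vswitchd.service",
             "virtqemud.service", "sshd.service"]:
    print(unit, bool(unit_include.fullmatch(unit)))
# sshd.service is the only one excluded.
```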
Nov 29 16:09:08 compute-0 nova_compute[189485]: 2025-11-29 16:09:08.227 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:11 compute-0 nova_compute[189485]: 2025-11-29 16:09:11.294 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:13 compute-0 nova_compute[189485]: 2025-11-29 16:09:13.230 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:16 compute-0 nova_compute[189485]: 2025-11-29 16:09:16.296 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:18 compute-0 nova_compute[189485]: 2025-11-29 16:09:18.234 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:19 compute-0 podman[263487]: 2025-11-29 16:09:19.720901801 +0000 UTC m=+0.162063366 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:09:21 compute-0 nova_compute[189485]: 2025-11-29 16:09:21.300 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:23 compute-0 nova_compute[189485]: 2025-11-29 16:09:23.239 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:26 compute-0 nova_compute[189485]: 2025-11-29 16:09:26.303 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:28 compute-0 nova_compute[189485]: 2025-11-29 16:09:28.243 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:29 compute-0 podman[203677]: time="2025-11-29T16:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:09:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:09:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
Nov 29 16:09:31 compute-0 nova_compute[189485]: 2025-11-29 16:09:31.306 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:31 compute-0 openstack_network_exporter[205841]: ERROR   16:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:09:31 compute-0 openstack_network_exporter[205841]: ERROR   16:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:09:31 compute-0 openstack_network_exporter[205841]: ERROR   16:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:09:31 compute-0 openstack_network_exporter[205841]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:09:31 compute-0 openstack_network_exporter[205841]: ERROR   16:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:09:33 compute-0 nova_compute[189485]: 2025-11-29 16:09:33.247 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:33 compute-0 nova_compute[189485]: 2025-11-29 16:09:33.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:33 compute-0 nova_compute[189485]: 2025-11-29 16:09:33.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 16:09:33 compute-0 nova_compute[189485]: 2025-11-29 16:09:33.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 16:09:33 compute-0 nova_compute[189485]: 2025-11-29 16:09:33.502 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
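The heal-cache block above is one of nova-compute's oslo.service periodic tasks: run_periodic_tasks fires each decorated method on its own interval. A minimal sketch of how such a task is declared, assuming oslo.service and oslo.config are installed; the class body and spacing value are illustrative, not nova's actual code:

```python
# Minimal sketch of how tasks like _heal_instance_info_cache are declared:
# methods on a PeriodicTasks subclass, registered via the decorator and
# driven by run_periodic_tasks(). The spacing value is illustrative.
from oslo_config import cfg
from oslo_service import periodic_task

class ComputeManager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # Rebuild the network info cache for a batch of instances; the
        # "Didn't find any instances" line above is the empty-host case.
        pass

manager = ComputeManager(cfg.CONF)
manager.run_periodic_tasks(context=None)
```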
Nov 29 16:09:33 compute-0 podman[263513]: 2025-11-29 16:09:33.65972881 +0000 UTC m=+0.090957076 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:09:33 compute-0 podman[263512]: 2025-11-29 16:09:33.681243897 +0000 UTC m=+0.117102488 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64)
Nov 29 16:09:33 compute-0 podman[263515]: 2025-11-29 16:09:33.693304452 +0000 UTC m=+0.116664637 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Nov 29 16:09:33 compute-0 podman[263517]: 2025-11-29 16:09:33.694619236 +0000 UTC m=+0.113105159 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, vendor=Red Hat, Inc., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 16:09:33 compute-0 podman[263516]: 2025-11-29 16:09:33.720676757 +0000 UTC m=+0.138553904 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Nov 29 16:09:33 compute-0 podman[263514]: 2025-11-29 16:09:33.72377271 +0000 UTC m=+0.154733339 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS)
Nov 29 16:09:35 compute-0 nova_compute[189485]: 2025-11-29 16:09:35.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:36 compute-0 nova_compute[189485]: 2025-11-29 16:09:36.309 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:36 compute-0 nova_compute[189485]: 2025-11-29 16:09:36.478 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.514 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.515 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.515 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.515 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.825 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.826 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5342MB free_disk=72.30619049072266GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
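The pci_devices payload embedded in the resource view above is plain JSON, so it can be lifted straight out and inspected; per the PCI ID registry, vendor 1af4 is Red Hat/virtio and 8086 is Intel. A sketch over a truncated sample of the logged list:

```python
# The logged pci_devices list is JSON; group it by vendor. Truncated sample
# of the entries above (1af4 = Red Hat/virtio, 8086 = Intel).
import json
from collections import Counter

pci_json = '''[
  {"dev_id": "pci_0000_00_07_0", "vendor_id": "1af4", "product_id": "1000"},
  {"dev_id": "pci_0000_00_01_0", "vendor_id": "8086", "product_id": "7000"},
  {"dev_id": "pci_0000_00_05_0", "vendor_id": "1af4", "product_id": "1002"}
]'''
devices = json.loads(pci_json)
print(Counter(d["vendor_id"] for d in devices))  # Counter({'1af4': 2, '8086': 1})
```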
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.826 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.826 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.902 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.903 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.938 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.959 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.961 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 16:09:37 compute-0 nova_compute[189485]: 2025-11-29 16:09:37.961 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.135s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
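The inventory dict above is what the resource tracker reports to placement; the usable ceiling per resource class is (total - reserved) * allocation_ratio. Worked through with the logged numbers:

```python
# Placement's usable ceiling per resource class is
# (total - reserved) * allocation_ratio, computed from the inventory above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
```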
Nov 29 16:09:38 compute-0 nova_compute[189485]: 2025-11-29 16:09:38.250 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:38 compute-0 podman[263627]: 2025-11-29 16:09:38.667570216 +0000 UTC m=+0.110177282 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 16:09:38 compute-0 podman[263628]: 2025-11-29 16:09:38.676009452 +0000 UTC m=+0.110902341 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:09:38 compute-0 nova_compute[189485]: 2025-11-29 16:09:38.962 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:38 compute-0 nova_compute[189485]: 2025-11-29 16:09:38.962 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:38 compute-0 nova_compute[189485]: 2025-11-29 16:09:38.962 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:39 compute-0 nova_compute[189485]: 2025-11-29 16:09:39.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:41 compute-0 nova_compute[189485]: 2025-11-29 16:09:41.311 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:42 compute-0 nova_compute[189485]: 2025-11-29 16:09:42.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:42 compute-0 nova_compute[189485]: 2025-11-29 16:09:42.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Nov 29 16:09:43 compute-0 nova_compute[189485]: 2025-11-29 16:09:43.254 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:46 compute-0 nova_compute[189485]: 2025-11-29 16:09:46.313 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:48 compute-0 nova_compute[189485]: 2025-11-29 16:09:48.257 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:50 compute-0 podman[263668]: 2025-11-29 16:09:50.687892089 +0000 UTC m=+0.123082869 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:09:51 compute-0 nova_compute[189485]: 2025-11-29 16:09:51.316 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:53 compute-0 nova_compute[189485]: 2025-11-29 16:09:53.260 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:55 compute-0 nova_compute[189485]: 2025-11-29 16:09:55.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:09:56 compute-0 nova_compute[189485]: 2025-11-29 16:09:56.320 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:58 compute-0 nova_compute[189485]: 2025-11-29 16:09:58.265 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:09:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:09:59.233 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:09:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:09:59.235 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:09:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:09:59.235 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:09:59 compute-0 podman[203677]: time="2025-11-29T16:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:09:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:09:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
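The two access-log entries above are libpod REST calls arriving over podman's unix socket (the CONTAINER_HOST value seen in the podman_exporter config earlier). The same listing request can be issued with only the standard library; the socket path and API version below are taken from the log:

```python
# GET /v4.9.3/libpod/containers/json over /run/podman/podman.sock (stdlib only).
import http.client
import json
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that dials a unix socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print(len(containers), "containers")
```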
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.068 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.069 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
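The two lines above describe the executor setup: many pollsters, a ThreadPoolExecutor with a single worker, so polls queue and run sequentially and the whole cycle stretches. The effect in miniature:

```python
# With max_workers=1, queued pollsters run back to back (~0.4s here, not ~0.1s).
import concurrent.futures
import time


def poll(name):
    time.sleep(0.1)        # stand-in for one pollster's sampling work
    return name


pollsters = ["cpu", "memory.usage", "network.incoming.bytes", "disk.root.size"]

with concurrent.futures.ThreadPoolExecutor(max_workers=1) as executor:
    futures = [executor.submit(poll, p) for p in pollsters]
    for fut in futures:
        print("finished", fut.result())
```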
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.069 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.070 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.071 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.073 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.072 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.073 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
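The Extension objects in the registration lines above are stevedore plugins: ceilometer discovers its pollsters through setuptools entry points rather than hard-coded imports. A generic sketch; the namespace is believed to be ceilometer's compute-pollster entry-point group, but treat it as an assumption:

```python
from stevedore import extension

# List the plugins registered under an entry-point namespace without loading them.
mgr = extension.ExtensionManager(
    namespace="ceilometer.poll.compute",  # assumed entry-point group
    invoke_on_load=False,
)
for ext in mgr:  # each ext is a stevedore.extension.Extension, as in the log
    print(ext.name, ext.entry_point)
```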
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.076 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.076 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.077 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
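Every pollster in this cycle was skipped for the same reason: the local_instances discovery returned an empty list (no guest VMs are running on this compute node yet), and a pollster with no resources is skipped rather than executed. The control flow, paraphrased rather than copied from the manager:

```python
# Paraphrase of the skip decision visible in the lines above.
def run_pollster(name, discovery_cache):
    resources = discovery_cache.get("local_instances", [])
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return
    for resource in resources:
        ...  # pollster.get_samples(resource) would run here


discovery_cache = {"local_instances": []}   # as shown in the registration lines
run_pollster("cpu", discovery_cache)
```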
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.085 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:10:01.086 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:10:01 compute-0 nova_compute[189485]: 2025-11-29 16:10:01.321 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:01 compute-0 openstack_network_exporter[205841]: ERROR   16:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:10:01 compute-0 openstack_network_exporter[205841]: ERROR   16:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:10:01 compute-0 openstack_network_exporter[205841]: ERROR   16:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:10:01 compute-0 openstack_network_exporter[205841]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:10:01 compute-0 openstack_network_exporter[205841]: ERROR   16:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
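The exporter errors above come from its ovs-appctl-style transport: it looks for per-daemon *.ctl control sockets under the OVS/OVN run directories, and the ovn-northd ones are legitimately absent on a compute node (this host runs ovn_controller, not ovn-northd). A quick check along the same lines, assuming the conventional socket paths that the exporter container bind-mounts:

```python
# Look for the control sockets the exporter probes (paths are the usual defaults).
import glob

for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
    matches = glob.glob(pattern)
    print(pattern, "->", matches or "no control socket files found")
```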
Nov 29 16:10:03 compute-0 nova_compute[189485]: 2025-11-29 16:10:03.270 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:04 compute-0 podman[263693]: 2025-11-29 16:10:04.635469753 +0000 UTC m=+0.086280780 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, container_name=kepler, version=9.4, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, release-0.7.12=, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 29 16:10:04 compute-0 podman[263696]: 2025-11-29 16:10:04.643981402 +0000 UTC m=+0.086824855 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 16:10:04 compute-0 podman[263708]: 2025-11-29 16:10:04.668924592 +0000 UTC m=+0.103750129 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., release=1755695350)
Nov 29 16:10:04 compute-0 podman[263694]: 2025-11-29 16:10:04.669324873 +0000 UTC m=+0.118113545 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 16:10:04 compute-0 podman[263695]: 2025-11-29 16:10:04.671158872 +0000 UTC m=+0.116676347 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:10:04 compute-0 podman[263700]: 2025-11-29 16:10:04.704769925 +0000 UTC m=+0.133351354 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
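A pattern worth noting across the health_status events above: every edpm-managed container mounts /var/lib/openstack/healthchecks/<name> at /openstack read-only and uses '/openstack/healthcheck [arg]' as its test command. Collecting that mapping from parsed config_data dicts (building on the ast.literal_eval sketch earlier; the dicts below are trimmed to the relevant keys):

```python
# Trimmed config_data healthcheck entries from the events above.
configs = {
    "ovn_controller": {"healthcheck": {"test": "/openstack/healthcheck",
                                       "mount": "/var/lib/openstack/healthchecks/ovn_controller"}},
    "kepler": {"healthcheck": {"test": "/openstack/healthcheck kepler",
                               "mount": "/var/lib/openstack/healthchecks/kepler"}},
}

for name, cfg in configs.items():
    hc = cfg["healthcheck"]
    print(f"{name}: test={hc['test']!r}, script mounted from {hc['mount']}")
```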
Nov 29 16:10:06 compute-0 nova_compute[189485]: 2025-11-29 16:10:06.324 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:08 compute-0 nova_compute[189485]: 2025-11-29 16:10:08.274 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
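These ovsdbapp DEBUG lines recur every few seconds for the life of nova-compute: the OVS IDL event loop wakes on POLLIN on its OVSDB socket (fd 26), mostly for the connection's periodic echo/inactivity probe, so at steady state they are pure noise. If they clutter the journal, the named logger can be raised above DEBUG; a sketch of the assumed approach using stdlib logging with the logger name printed above (in nova this would normally be done through oslo.log's default_log_levels option instead):

    # Assumed approach: quiet the IDL wakeup messages by raising the level
    # of the exact logger named in the log lines above.
    import logging
    logging.getLogger("ovsdbapp.backend.ovs_idl.vlog").setLevel(logging.INFO)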
Nov 29 16:10:09 compute-0 podman[263802]: 2025-11-29 16:10:09.637764491 +0000 UTC m=+0.089676792 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:10:09 compute-0 podman[263803]: 2025-11-29 16:10:09.681370292 +0000 UTC m=+0.119348469 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
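The node_exporter container above disables most collectors and restricts the systemd collector to units matching --collector.systemd.unit-include. A quick check of which units that pattern keeps, assuming node_exporter anchors include patterns so that fullmatch() mirrors its behaviour:

    # Hypothetical check of the unit-include regex from the config above.
    import re

    pat = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ["openvswitch.service", "virtqemud.service",
                 "edpm_nova_compute.service", "sshd.service"]:
        print(unit, "kept" if pat.fullmatch(unit) else "dropped")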
Nov 29 16:10:11 compute-0 nova_compute[189485]: 2025-11-29 16:10:11.326 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:13 compute-0 nova_compute[189485]: 2025-11-29 16:10:13.279 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:16 compute-0 nova_compute[189485]: 2025-11-29 16:10:16.329 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:18 compute-0 nova_compute[189485]: 2025-11-29 16:10:18.284 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:21 compute-0 nova_compute[189485]: 2025-11-29 16:10:21.331 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:21 compute-0 podman[263843]: 2025-11-29 16:10:21.682995763 +0000 UTC m=+0.128898225 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:10:23 compute-0 nova_compute[189485]: 2025-11-29 16:10:23.288 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:26 compute-0 nova_compute[189485]: 2025-11-29 16:10:26.334 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:28 compute-0 nova_compute[189485]: 2025-11-29 16:10:28.293 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:29 compute-0 podman[203677]: time="2025-11-29T16:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:10:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:10:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4337 "" "Go-http-client/1.1"
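These two GETs are podman_exporter scraping the libpod REST API through /run/podman/podman.sock (the CONTAINER_HOST it is started with, per its config_data above). The same endpoint can be queried directly; a self-contained sketch over the unix socket, assuming read access to podman.sock:

    # Sketch: query the libpod API the way the exporter does, over the
    # unix socket shown in the access log above.
    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self._path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"][0], c["State"])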
Nov 29 16:10:31 compute-0 nova_compute[189485]: 2025-11-29 16:10:31.338 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:31 compute-0 openstack_network_exporter[205841]: ERROR   16:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:10:31 compute-0 openstack_network_exporter[205841]: ERROR   16:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:10:31 compute-0 openstack_network_exporter[205841]: ERROR   16:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:10:31 compute-0 openstack_network_exporter[205841]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:10:31 compute-0 openstack_network_exporter[205841]: ERROR   16:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
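This error block repeats on every scrape: openstack_network_exporter looks for ovs-vswitchd/ovsdb-server control sockets under its /run/openvswitch mount and for ovn-northd under /run/ovn, but ovn-northd only runs on the control plane, and the dpif-netdev/* appctl calls apply only to the userspace (DPDK) datapath, which this kernel-datapath node does not have. What the exporter attempts can be reproduced by hand; a sketch assuming the usual ovs-vswitchd.<pid>.ctl socket naming:

    # Sketch of the exporter's probe: find the vswitchd control socket and
    # issue an appctl call against it (the socket layout is an assumption).
    import glob, subprocess

    sockets = glob.glob("/run/openvswitch/ovs-vswitchd.*.ctl")
    if not sockets:
        print("no ovs-vswitchd control socket found")  # the exporter's case
    else:
        out = subprocess.run(["ovs-appctl", "-t", sockets[0], "version"],
                             capture_output=True, text=True)
        print(out.stdout or out.stderr)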
Nov 29 16:10:33 compute-0 nova_compute[189485]: 2025-11-29 16:10:33.299 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:33 compute-0 nova_compute[189485]: 2025-11-29 16:10:33.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:33 compute-0 nova_compute[189485]: 2025-11-29 16:10:33.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:10:33 compute-0 nova_compute[189485]: 2025-11-29 16:10:33.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:10:33 compute-0 nova_compute[189485]: 2025-11-29 16:10:33.507 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
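_heal_instance_info_cache runs on its own timer (heal_instance_info_cache_interval, 60 seconds by default) and refreshes the network info cache one instance per pass; with no instances on this host it exits immediately, as logged. The cadence is tunable in nova.conf; an illustrative snippet, not this deployment's actual setting:

    [DEFAULT]
    # Default is 60; larger values slow the per-instance cache healing.
    heal_instance_info_cache_interval = 60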
Nov 29 16:10:35 compute-0 podman[263867]: 2025-11-29 16:10:35.660739837 +0000 UTC m=+0.098238461 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 16:10:35 compute-0 podman[263868]: 2025-11-29 16:10:35.668357632 +0000 UTC m=+0.101011905 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Nov 29 16:10:35 compute-0 podman[263871]: 2025-11-29 16:10:35.689316756 +0000 UTC m=+0.108196189 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Nov 29 16:10:35 compute-0 podman[263869]: 2025-11-29 16:10:35.691507204 +0000 UTC m=+0.134017212 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:10:35 compute-0 podman[263866]: 2025-11-29 16:10:35.700007392 +0000 UTC m=+0.141406171 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30)
Nov 29 16:10:35 compute-0 podman[263870]: 2025-11-29 16:10:35.704691899 +0000 UTC m=+0.121584599 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Nov 29 16:10:36 compute-0 nova_compute[189485]: 2025-11-29 16:10:36.341 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:36 compute-0 nova_compute[189485]: 2025-11-29 16:10:36.504 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:37 compute-0 nova_compute[189485]: 2025-11-29 16:10:37.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:37 compute-0 nova_compute[189485]: 2025-11-29 16:10:37.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:37 compute-0 nova_compute[189485]: 2025-11-29 16:10:37.528 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:10:37 compute-0 nova_compute[189485]: 2025-11-29 16:10:37.529 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:10:37 compute-0 nova_compute[189485]: 2025-11-29 16:10:37.530 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:10:37 compute-0 nova_compute[189485]: 2025-11-29 16:10:37.530 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.076 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
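Nova's libvirt driver emits this warning once per resource audit when the host reports more CPU sockets than NUMA nodes, as this KVM guest does. It is harmless unless an instance actually requests socket-level PCI NUMA affinity, e.g. via the flavor extra spec below (illustrative command; <flavor> is a placeholder):

    openstack flavor set --property hw:pci_numa_affinity_policy=socket <flavor>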
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.078 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5333MB free_disk=72.30622100830078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.078 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.079 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.160 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.161 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.196 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.223 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
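Placement derives usable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory reported above works out as follows:

    # Effective capacity implied by the inventory in the previous line.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2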
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.226 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.227 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:10:38 compute-0 nova_compute[189485]: 2025-11-29 16:10:38.304 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:40 compute-0 nova_compute[189485]: 2025-11-29 16:10:40.229 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:40 compute-0 podman[263979]: 2025-11-29 16:10:40.337258079 +0000 UTC m=+0.067236947 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 16:10:40 compute-0 podman[263978]: 2025-11-29 16:10:40.344804803 +0000 UTC m=+0.081678716 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 16:10:40 compute-0 nova_compute[189485]: 2025-11-29 16:10:40.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:40 compute-0 nova_compute[189485]: 2025-11-29 16:10:40.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:40 compute-0 nova_compute[189485]: 2025-11-29 16:10:40.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:41 compute-0 nova_compute[189485]: 2025-11-29 16:10:41.344 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:43 compute-0 nova_compute[189485]: 2025-11-29 16:10:43.308 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:44 compute-0 nova_compute[189485]: 2025-11-29 16:10:44.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:10:44 compute-0 nova_compute[189485]: 2025-11-29 16:10:44.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
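_reclaim_queued_deletes purges instances that were soft-deleted, but only when reclaim_instance_interval is positive; at the default of 0, deletes are immediate and the task skips itself, exactly as logged. An illustrative nova.conf snippet (not this node's actual configuration):

    [DEFAULT]
    # 0 (default) disables deferred delete; a positive value keeps deleted
    # instances restorable for that many seconds before reclamation.
    reclaim_instance_interval = 3600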
Nov 29 16:10:46 compute-0 nova_compute[189485]: 2025-11-29 16:10:46.348 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:48 compute-0 nova_compute[189485]: 2025-11-29 16:10:48.313 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:51 compute-0 nova_compute[189485]: 2025-11-29 16:10:51.352 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:52 compute-0 podman[264020]: 2025-11-29 16:10:52.671239851 +0000 UTC m=+0.115217738 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:10:53 compute-0 nova_compute[189485]: 2025-11-29 16:10:53.317 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:56 compute-0 nova_compute[189485]: 2025-11-29 16:10:56.355 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:58 compute-0 nova_compute[189485]: 2025-11-29 16:10:58.321 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:10:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:10:59.235 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:10:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:10:59.236 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:10:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:10:59.236 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
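The three lockutils lines show the metadata agent's ProcessMonitor waking up, taking the _check_child_processes lock, verifying the haproxy children it spawned, and releasing the lock within a millisecond, i.e. nothing needed respawning. The serialization is plain oslo.concurrency; a sketch of the idiom (not the actual neutron code):

    # Sketch of the oslo.concurrency idiom behind the lines above.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # iterate monitored child processes; respawn any that have died
        pass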
Nov 29 16:10:59 compute-0 podman[203677]: time="2025-11-29T16:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:10:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:10:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4340 "" "Go-http-client/1.1"
Nov 29 16:11:01 compute-0 nova_compute[189485]: 2025-11-29 16:11:01.358 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:01 compute-0 openstack_network_exporter[205841]: ERROR   16:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:11:01 compute-0 openstack_network_exporter[205841]: ERROR   16:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:11:01 compute-0 openstack_network_exporter[205841]: ERROR   16:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:11:01 compute-0 openstack_network_exporter[205841]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:11:01 compute-0 openstack_network_exporter[205841]: ERROR   16:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:11:03 compute-0 nova_compute[189485]: 2025-11-29 16:11:03.325 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:06 compute-0 nova_compute[189485]: 2025-11-29 16:11:06.361 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:06 compute-0 podman[264059]: 2025-11-29 16:11:06.693271946 +0000 UTC m=+0.088995672 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Nov 29 16:11:06 compute-0 podman[264046]: 2025-11-29 16:11:06.693913604 +0000 UTC m=+0.115963378 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:11:06 compute-0 podman[264045]: 2025-11-29 16:11:06.697546331 +0000 UTC m=+0.125362080 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Nov 29 16:11:06 compute-0 podman[264043]: 2025-11-29 16:11:06.72876279 +0000 UTC m=+0.156520117 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, name=ubi9, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release-0.7.12=, architecture=x86_64, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Nov 29 16:11:06 compute-0 podman[264044]: 2025-11-29 16:11:06.732703455 +0000 UTC m=+0.152176189 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 16:11:06 compute-0 podman[264047]: 2025-11-29 16:11:06.764302475 +0000 UTC m=+0.166357062 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 16:11:08 compute-0 nova_compute[189485]: 2025-11-29 16:11:08.329 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:10 compute-0 podman[264155]: 2025-11-29 16:11:10.684217456 +0000 UTC m=+0.121741733 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Nov 29 16:11:10 compute-0 podman[264154]: 2025-11-29 16:11:10.70187089 +0000 UTC m=+0.140307662 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 16:11:11 compute-0 nova_compute[189485]: 2025-11-29 16:11:11.366 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:13 compute-0 nova_compute[189485]: 2025-11-29 16:11:13.333 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:16 compute-0 nova_compute[189485]: 2025-11-29 16:11:16.371 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:18 compute-0 nova_compute[189485]: 2025-11-29 16:11:18.337 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:21 compute-0 nova_compute[189485]: 2025-11-29 16:11:21.375 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:23 compute-0 nova_compute[189485]: 2025-11-29 16:11:23.342 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
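The recurring [POLLIN] on fd 26 lines come from the python-ovs poller waking whenever the OVSDB connection has data to read. A stripped-down stdlib stand-in for that readiness loop (the 127.0.0.1:6642 endpoint is an assumption; the real agent connects over TLS through the ovs.poller API):

    # Sketch: readiness loop behind the "[POLLIN] on fd 26" messages.
    import select
    import socket

    sock = socket.create_connection(("127.0.0.1", 6642))  # assumed SB endpoint
    poller = select.poll()
    poller.register(sock.fileno(), select.POLLIN)

    while True:
        for fd, events in poller.poll():
            if events & select.POLLIN:
                print(f"[POLLIN] on fd {fd}")   # mirrors the vlog line
                data = sock.recv(4096)          # drain pending JSON-RPC
                if not data:
                    raise ConnectionError("OVSDB connection closed")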
Nov 29 16:11:23 compute-0 podman[264195]: 2025-11-29 16:11:23.713837722 +0000 UTC m=+0.145499001 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:11:26 compute-0 nova_compute[189485]: 2025-11-29 16:11:26.379 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:28 compute-0 nova_compute[189485]: 2025-11-29 16:11:28.346 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:29 compute-0 podman[203677]: time="2025-11-29T16:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:11:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:11:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
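The two GET requests above are podman_exporter scraping the libpod REST API over /run/podman/podman.sock (the CONTAINER_HOST socket from its config_data). A sketch issuing the same container listing with the stdlib only; the socket path and API version are taken from these lines, the rest is illustrative:

    # Sketch: query the libpod API over the podman unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")
            self.sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.load(conn.getresponse())
    print(len(containers), "containers")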
Nov 29 16:11:31 compute-0 nova_compute[189485]: 2025-11-29 16:11:31.381 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:31 compute-0 openstack_network_exporter[205841]: ERROR   16:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:11:31 compute-0 openstack_network_exporter[205841]: ERROR   16:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:11:31 compute-0 openstack_network_exporter[205841]: ERROR   16:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:11:31 compute-0 openstack_network_exporter[205841]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:11:31 compute-0 openstack_network_exporter[205841]: ERROR   16:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
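The exporter errors above occur because appctl-style calls need the target daemon's control socket, conventionally <rundir>/<daemon>.<pid>.ctl next to a <daemon>.pid file; ovn-northd runs on the control plane rather than on compute-0, so no such files exist here. A sketch of that lookup, with the paths treated as assumptions:

    # Sketch: locate a daemon control socket the way appctl clients do.
    from pathlib import Path

    def control_socket(rundir, daemon):
        pidfile = Path(rundir) / f"{daemon}.pid"
        if not pidfile.exists():
            raise FileNotFoundError(f"no control socket files found for {daemon}")
        pid = pidfile.read_text().strip()
        sock = Path(rundir) / f"{daemon}.{pid}.ctl"
        if not sock.exists():
            raise FileNotFoundError(f"no control socket files found for {daemon}")
        return sock

    # control_socket("/var/lib/openvswitch/ovn", "ovn-northd") raises the
    # same error as the exporter, since ovn-northd is absent on this node.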
Nov 29 16:11:33 compute-0 nova_compute[189485]: 2025-11-29 16:11:33.352 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:33 compute-0 nova_compute[189485]: 2025-11-29 16:11:33.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:33 compute-0 nova_compute[189485]: 2025-11-29 16:11:33.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:11:33 compute-0 nova_compute[189485]: 2025-11-29 16:11:33.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:11:33 compute-0 nova_compute[189485]: 2025-11-29 16:11:33.500 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
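_heal_instance_info_cache is one of many oslo.service periodic tasks; the run_periodic_tasks lines show the dispatcher invoking each decorated method in turn. A minimal sketch of the pattern (the spacing value is illustrative; nova's ComputeManager defines the real tasks):

    # Sketch: oslo.service periodic-task pattern behind the lines above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            instances = []  # empty -> "Didn't find any instances ..." above
            if not instances:
                return

    # The service loop calls Manager().run_periodic_tasks(context) on a timer.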
Nov 29 16:11:36 compute-0 nova_compute[189485]: 2025-11-29 16:11:36.385 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:36 compute-0 nova_compute[189485]: 2025-11-29 16:11:36.495 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:37 compute-0 podman[264221]: 2025-11-29 16:11:37.706998961 +0000 UTC m=+0.118848695 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 16:11:37 compute-0 podman[264233]: 2025-11-29 16:11:37.727680686 +0000 UTC m=+0.127380074 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, build-date=2025-08-20T13:12:41, config_id=edpm)
Nov 29 16:11:37 compute-0 podman[264219]: 2025-11-29 16:11:37.732403324 +0000 UTC m=+0.167781620 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, container_name=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4, config_id=edpm, release=1214.1726694543, release-0.7.12=, distribution-scope=public, io.buildah.version=1.29.0)
Nov 29 16:11:37 compute-0 podman[264222]: 2025-11-29 16:11:37.736219427 +0000 UTC m=+0.156023315 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 16:11:37 compute-0 podman[264220]: 2025-11-29 16:11:37.741108737 +0000 UTC m=+0.165635271 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125)
Nov 29 16:11:37 compute-0 podman[264227]: 2025-11-29 16:11:37.756150542 +0000 UTC m=+0.161412979 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.356 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.527 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.527 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
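The Acquiring/acquired/released triple above is oslo.concurrency's named-lock instrumentation; nova wraps resource-tracker methods in a prefixed synchronized decorator. A sketch of the pattern that produces these lines:

    # Sketch: the lock pattern behind the "compute_resources" lines.
    from oslo_concurrency import lockutils

    synchronized = lockutils.synchronized_with_prefix("nova-")

    @synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # runs with the named lock held; waited/held times get logged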
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.528 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.965 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.966 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5336MB free_disk=72.30622100830078GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
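The resource view embeds pci_devices as a JSON list, so it can be cut out of the line and inspected directly. A sketch grouping the functions by vendor (the sample is truncated to one entry; the full line holds 6 virtio (1af4) and 5 Intel (8086) functions):

    # Sketch: group the pci_devices JSON from the line above by vendor.
    import json
    from collections import Counter

    pci_json = ('[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", '
                '"product_id": "1000", "vendor_id": "1af4", "numa_node": null, '
                '"label": "label_1af4_1000", "dev_type": "type-PCI"}]')

    by_vendor = Counter(d["vendor_id"] for d in json.loads(pci_json))
    print(by_vendor)  # full line yields Counter({'1af4': 6, '8086': 5})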
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.966 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:11:38 compute-0 nova_compute[189485]: 2025-11-29 16:11:38.967 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:11:39 compute-0 nova_compute[189485]: 2025-11-29 16:11:39.298 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:11:39 compute-0 nova_compute[189485]: 2025-11-29 16:11:39.298 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:11:39 compute-0 nova_compute[189485]: 2025-11-29 16:11:39.344 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:11:39 compute-0 nova_compute[189485]: 2025-11-29 16:11:39.366 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
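The inventory dict above is what placement uses to size the node: schedulable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for these numbers:

    # Sketch: capacity implied by the inventory data above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2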
Nov 29 16:11:39 compute-0 nova_compute[189485]: 2025-11-29 16:11:39.368 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:11:39 compute-0 nova_compute[189485]: 2025-11-29 16:11:39.369 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.402s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:11:40 compute-0 nova_compute[189485]: 2025-11-29 16:11:40.371 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:40 compute-0 nova_compute[189485]: 2025-11-29 16:11:40.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:41 compute-0 nova_compute[189485]: 2025-11-29 16:11:41.386 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:41 compute-0 nova_compute[189485]: 2025-11-29 16:11:41.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:41 compute-0 nova_compute[189485]: 2025-11-29 16:11:41.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:41 compute-0 podman[264331]: 2025-11-29 16:11:41.669382963 +0000 UTC m=+0.101259823 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 16:11:41 compute-0 podman[264330]: 2025-11-29 16:11:41.679213207 +0000 UTC m=+0.116432601 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3)
Nov 29 16:11:42 compute-0 nova_compute[189485]: 2025-11-29 16:11:42.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:43 compute-0 nova_compute[189485]: 2025-11-29 16:11:43.359 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:46 compute-0 nova_compute[189485]: 2025-11-29 16:11:46.389 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:46 compute-0 nova_compute[189485]: 2025-11-29 16:11:46.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:46 compute-0 nova_compute[189485]: 2025-11-29 16:11:46.483 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 16:11:48 compute-0 nova_compute[189485]: 2025-11-29 16:11:48.362 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:49 compute-0 nova_compute[189485]: 2025-11-29 16:11:49.517 189489 DEBUG oslo_concurrency.processutils [None req-ec2001b8-cd48-4ce5-a766-a383ea577269 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] Running cmd (subprocess): env LANG=C uptime execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Nov 29 16:11:49 compute-0 nova_compute[189485]: 2025-11-29 16:11:49.554 189489 DEBUG oslo_concurrency.processutils [None req-ec2001b8-cd48-4ce5-a766-a383ea577269 5cbf094e2197487fbe16a0fe6e3076ba 04d676205d9142d19f3d4ce7389f72a2 - - default default] CMD "env LANG=C uptime" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
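The Running cmd / "returned: 0" pair above is oslo.concurrency's subprocess wrapper, which nova uses here to read host uptime. A sketch of the same call:

    # Sketch: the oslo.concurrency call behind the two lines above.
    from oslo_concurrency import processutils

    out, err = processutils.execute("env", "LANG=C", "uptime")
    print(out.strip())  # load averages feed nova's host uptime reporting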
Nov 29 16:11:51 compute-0 nova_compute[189485]: 2025-11-29 16:11:51.392 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:53 compute-0 nova_compute[189485]: 2025-11-29 16:11:53.366 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:54 compute-0 podman[264373]: 2025-11-29 16:11:54.670382199 +0000 UTC m=+0.110372298 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:11:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:11:56.246 106713 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'ba:7f:b3', 'max_tunid': '16711680', 'northd_internal_version': '24.03.7-20.33.0-76.8', 'svc_monitor_mac': 'ca:95:82:a7:f5:05'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Nov 29 16:11:56 compute-0 nova_compute[189485]: 2025-11-29 16:11:56.247 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:56 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:11:56.248 106713 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Nov 29 16:11:56 compute-0 nova_compute[189485]: 2025-11-29 16:11:56.395 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:58 compute-0 nova_compute[189485]: 2025-11-29 16:11:58.371 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:11:58 compute-0 nova_compute[189485]: 2025-11-29 16:11:58.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:11:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:11:59.236 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:11:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:11:59.237 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:11:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:11:59.237 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:11:59 compute-0 podman[203677]: time="2025-11-29T16:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:11:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:11:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4337 "" "Go-http-client/1.1"
Nov 29 16:12:00 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:12:00.250 106713 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
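The transaction above is the delayed chassis update announced at 16:11:56: the agent writes the acked nb_cfg back into Chassis_Private via a DbSetCommand. Through ovsdbapp this is a one-liner; in the sketch below, sb_idl stands in for the agent's connected southbound API object (an assumption):

    # Sketch: the ovsdbapp call behind the DbSetCommand txn above.
    chassis = "3cd9fbbe-000b-4bc6-a20b-a0658be5fe0a"
    sb_idl.db_set(
        "Chassis_Private", chassis,
        ("external_ids", {"neutron:ovn-metadata-sb-cfg": "19"}),
    ).execute(check_error=True)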
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.069 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.070 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.070 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
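The registration lines above hand every pollster to a single-worker ThreadPoolExecutor, which is why the manager warned that one thread must serialize them all. A toy sketch of that dispatch (pollster names taken from the log, logic illustrative):

    # Sketch: single-worker pollster dispatch, matching
    # "Processing pollsters for [pollsters] with [1] threads."
    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        # discovery found no local instances, so every pollster is skipped
        return f"Skip pollster {name}, no resources found this cycle"

    pollsters = ["network.outgoing.bytes", "memory.usage", "network.incoming.bytes"]
    with ThreadPoolExecutor(max_workers=1) as pool:
        for line in pool.map(run_pollster, pollsters):
            print(line)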
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.081 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.089 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.090 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.090 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.091 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
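Annotation: every Executing-discovery/Skip pair above tells the same story: the pollster runs the local_instances discovery, finds no VMs on this host, and is skipped, while the (empty) discovery result is cached so later pollsters in the cycle reuse it, which is why the registration lines above show discovery cache [{'local_instances': []}]. A sketch of that control flow in plain Python (all names illustrative):

    # Discovery-then-skip control flow mirroring the lines above: run
    # each discovery method once per cycle, cache the result, and skip
    # a pollster when its discovery yields no resources.
    def poll_cycle(pollsters, discover):
        discovery_cache = {}
        for name, method in pollsters:
            if method not in discovery_cache:
                discovery_cache[method] = discover(method)
            resources = discovery_cache[method]
            if not resources:
                print(f'Skip pollster {name}, '
                      f'no resources found this cycle')
                continue
            # otherwise: poll the resources and publish samples

    poll_cycle([('cpu', 'local_instances'),
                ('memory.usage', 'local_instances')],
               discover=lambda method: [])   # no VMs on this host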
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.094 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.095 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.096 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:12:01.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:12:01 compute-0 nova_compute[189485]: 2025-11-29 16:12:01.398 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
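Annotation: the recurring [POLLIN] on fd 26 lines are the OVS IDL's poll loop inside nova_compute noting that its OVSDB connection has readable data; they are routine at DEBUG level. A sketch of the underlying mechanism with the standard library, using a pipe in place of the OVSDB socket:

    # poll()-style wakeup like the ovs poller's __log_wakeup: register
    # a readable fd and block until POLLIN. A pipe stands in for the
    # OVSDB connection socket.
    import os
    import select

    r, w = os.pipe()
    poller = select.poll()
    poller.register(r, select.POLLIN)

    os.write(w, b'update')                   # peer sends data
    for fd, events in poller.poll(1000):     # timeout in milliseconds
        if events & select.POLLIN:
            print(f'[POLLIN] on fd {fd}:', os.read(fd, 64))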
Nov 29 16:12:01 compute-0 openstack_network_exporter[205841]: ERROR   16:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:12:01 compute-0 openstack_network_exporter[205841]: ERROR   16:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:12:01 compute-0 openstack_network_exporter[205841]: ERROR   16:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:12:01 compute-0 openstack_network_exporter[205841]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:12:01 compute-0 openstack_network_exporter[205841]: ERROR   16:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
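Annotation: the exporter errors above share one cause: appctl-style calls address a daemon through its unix control socket (conventionally <daemon>.<pid>.ctl in the daemon's run directory), and on a compute node ovn-northd and a standalone ovsdb-server are not running, so no socket exists and every scrape logs the failure. A sketch of that PID lookup; the run directories below are conventional defaults and an assumption here, not taken from the exporter's source:

    # Hypothetical lookup of a daemon's appctl control socket; OVS/OVN
    # daemons create <name>.<pid>.ctl under their run directory.
    import glob
    import os

    def find_ctl_pid(name, rundirs=('/run/ovn', '/run/openvswitch')):
        for d in rundirs:
            for path in glob.glob(os.path.join(d, f'{name}.*.ctl')):
                try:
                    return int(path.rsplit('.', 2)[-2])
                except ValueError:
                    continue
        raise FileNotFoundError(
            f'no control socket files found for {name}')

On this host ovn-northd never runs, so a lookup like this fails on each scrape, which is what appctl.go reports as ERROR above.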
Nov 29 16:12:03 compute-0 nova_compute[189485]: 2025-11-29 16:12:03.375 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:06 compute-0 nova_compute[189485]: 2025-11-29 16:12:06.401 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:08 compute-0 nova_compute[189485]: 2025-11-29 16:12:08.381 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:08 compute-0 podman[264398]: 2025-11-29 16:12:08.677930468 +0000 UTC m=+0.102403843 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 16:12:08 compute-0 podman[264399]: 2025-11-29 16:12:08.710771841 +0000 UTC m=+0.128294570 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Nov 29 16:12:08 compute-0 podman[264406]: 2025-11-29 16:12:08.71333187 +0000 UTC m=+0.121276031 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, architecture=x86_64, vendor=Red Hat, Inc., name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Nov 29 16:12:08 compute-0 podman[264396]: 2025-11-29 16:12:08.721799307 +0000 UTC m=+0.157377081 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, version=9.4, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_id=edpm)
Nov 29 16:12:08 compute-0 podman[264397]: 2025-11-29 16:12:08.721780277 +0000 UTC m=+0.140477017 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 16:12:08 compute-0 podman[264405]: 2025-11-29 16:12:08.722709721 +0000 UTC m=+0.135741659 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Nov 29 16:12:11 compute-0 nova_compute[189485]: 2025-11-29 16:12:11.404 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:12 compute-0 podman[264510]: 2025-11-29 16:12:12.653143555 +0000 UTC m=+0.095710204 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Nov 29 16:12:12 compute-0 podman[264509]: 2025-11-29 16:12:12.700841237 +0000 UTC m=+0.139617753 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
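Annotation: the container health_status events above are podman's healthcheck timers running the check each container defines (the /openstack/healthcheck scripts mounted into them, per the config_data). The same check can be triggered by hand with the podman healthcheck run subcommand; a small wrapper, with the container name taken from the log above:

    # Run a container's configured healthcheck on demand, as the timer
    # does; `podman healthcheck run` exits 0 when the check passes.
    import subprocess

    def health_status(container):
        result = subprocess.run(
            ['podman', 'healthcheck', 'run', container],
            capture_output=True, text=True)
        return 'healthy' if result.returncode == 0 else 'unhealthy'

    print(health_status('multipathd'))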
Nov 29 16:12:13 compute-0 nova_compute[189485]: 2025-11-29 16:12:13.384 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:16 compute-0 nova_compute[189485]: 2025-11-29 16:12:16.408 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:18 compute-0 nova_compute[189485]: 2025-11-29 16:12:18.388 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:21 compute-0 nova_compute[189485]: 2025-11-29 16:12:21.411 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:23 compute-0 nova_compute[189485]: 2025-11-29 16:12:23.391 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:25 compute-0 podman[264551]: 2025-11-29 16:12:25.647715089 +0000 UTC m=+0.083763871 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Nov 29 16:12:26 compute-0 nova_compute[189485]: 2025-11-29 16:12:26.414 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:28 compute-0 nova_compute[189485]: 2025-11-29 16:12:28.395 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:29 compute-0 podman[203677]: time="2025-11-29T16:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:12:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:12:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
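Annotation: the two GET lines above are podman's REST service answering a client over its unix socket (podman_exporter mounts /run/podman/podman.sock, per its config above). The same libpod endpoint can be queried with only the standard library; socket path and API version are copied from the access log:

    # Speak HTTP/1.1 over the podman unix socket and issue the same
    # libpod API call seen in the access log above.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__('localhost')
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), 'bytes')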
Nov 29 16:12:31 compute-0 nova_compute[189485]: 2025-11-29 16:12:31.416 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:31 compute-0 openstack_network_exporter[205841]: ERROR   16:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:12:31 compute-0 openstack_network_exporter[205841]: ERROR   16:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:12:31 compute-0 openstack_network_exporter[205841]: ERROR   16:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:12:31 compute-0 openstack_network_exporter[205841]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:12:31 compute-0 openstack_network_exporter[205841]: ERROR   16:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:12:33 compute-0 nova_compute[189485]: 2025-11-29 16:12:33.399 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:34 compute-0 nova_compute[189485]: 2025-11-29 16:12:34.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:34 compute-0 nova_compute[189485]: 2025-11-29 16:12:34.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:12:34 compute-0 nova_compute[189485]: 2025-11-29 16:12:34.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:12:34 compute-0 nova_compute[189485]: 2025-11-29 16:12:34.508 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
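Annotation: the four nova lines above are one run of the _heal_instance_info_cache periodic task: it rebuilds the list of instances whose network info cache might need healing and, with no instances on this host, returns immediately. Nova drives such tasks through oslo.service's periodic-task machinery; a minimal sketch of that pattern (the 60s spacing and the task body are illustrative, not nova's actual values):

    # oslo.service periodic-task pattern behind the "Running periodic
    # task ComputeManager._heal_instance_info_cache" line above.
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            instances = []        # stand-in for the rebuilt heal list
            if not instances:
                print("Didn't find any instances for network info "
                      "cache update.")

    mgr = Manager(cfg.CONF)
    # A service loop calls this repeatedly; each decorated task fires
    # once its spacing has elapsed.
    mgr.run_periodic_tasks(context=None)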
Nov 29 16:12:36 compute-0 nova_compute[189485]: 2025-11-29 16:12:36.419 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:38 compute-0 nova_compute[189485]: 2025-11-29 16:12:38.402 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:38 compute-0 nova_compute[189485]: 2025-11-29 16:12:38.503 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.532 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.533 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.533 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.534 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:12:39 compute-0 podman[264576]: 2025-11-29 16:12:39.687966094 +0000 UTC m=+0.124669561 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., name=ubi9, version=9.4, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., architecture=x86_64)
Nov 29 16:12:39 compute-0 podman[264578]: 2025-11-29 16:12:39.712887314 +0000 UTC m=+0.120791657 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:12:39 compute-0 podman[264584]: 2025-11-29 16:12:39.71644158 +0000 UTC m=+0.125291469 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4)
Nov 29 16:12:39 compute-0 podman[264577]: 2025-11-29 16:12:39.724827345 +0000 UTC m=+0.146553279 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS)
Nov 29 16:12:39 compute-0 podman[264585]: 2025-11-29 16:12:39.739569761 +0000 UTC m=+0.134850725 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 16:12:39 compute-0 podman[264592]: 2025-11-29 16:12:39.74363538 +0000 UTC m=+0.140130016 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, name=ubi9-minimal, container_name=openstack_network_exporter, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, release=1755695350, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm)
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.921 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.923 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5331MB free_disk=72.30624389648438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.924 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:12:39 compute-0 nova_compute[189485]: 2025-11-29 16:12:39.924 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.134 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.134 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.246 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing inventories for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.368 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating ProviderTree inventory for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.369 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Updating inventory in ProviderTree for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.388 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing aggregate associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.409 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Refreshing trait associations for resource provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd, traits: HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_IMAGE_TYPE_ISO,HW_CPU_X86_SSE,HW_CPU_X86_SSE4A,COMPUTE_ACCELERATORS,HW_CPU_X86_AVX,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_MMX,HW_CPU_X86_ABM,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_RTL8139,HW_CPU_X86_AVX2,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_GRAPHICS_MODEL_BOCHS,HW_CPU_X86_SSE41,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_DEVICE_TAGGING,COMPUTE_IMAGE_TYPE_AKI,HW_CPU_X86_AESNI,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_SSE42,COMPUTE_STORAGE_BUS_IDE,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_VOLUME_MULTI_ATTACH,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_INTEL,COMPUTE_TRUSTED_CERTS,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_IMAGE_TYPE_ARI,HW_CPU_X86_SSSE3,HW_CPU_X86_AMD_SVM,COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_SECURITY_TPM_2_0,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_STORAGE_BUS_VIRTIO,HW_CPU_X86_BMI,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SVM,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_BMI2,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_RESCUE_BFV,COMPUTE_NODE,HW_CPU_X86_SSE2,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_STORAGE_BUS_FDC,COMPUTE_VOLUME_EXTEND,COMPUTE_GRAPHICS_MODEL_VGA _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.442 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.491 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.494 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.495 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.571s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.496 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.497 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Nov 29 16:12:40 compute-0 nova_compute[189485]: 2025-11-29 16:12:40.513 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Nov 29 16:12:41 compute-0 nova_compute[189485]: 2025-11-29 16:12:41.423 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:41 compute-0 nova_compute[189485]: 2025-11-29 16:12:41.514 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:41 compute-0 nova_compute[189485]: 2025-11-29 16:12:41.514 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:41 compute-0 nova_compute[189485]: 2025-11-29 16:12:41.514 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:42 compute-0 nova_compute[189485]: 2025-11-29 16:12:42.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:43 compute-0 nova_compute[189485]: 2025-11-29 16:12:43.405 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:43 compute-0 podman[264689]: 2025-11-29 16:12:43.656285434 +0000 UTC m=+0.093653187 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:12:43 compute-0 podman[264688]: 2025-11-29 16:12:43.690782582 +0000 UTC m=+0.135676858 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Nov 29 16:12:44 compute-0 nova_compute[189485]: 2025-11-29 16:12:44.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:46 compute-0 nova_compute[189485]: 2025-11-29 16:12:46.425 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:47 compute-0 nova_compute[189485]: 2025-11-29 16:12:47.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:47 compute-0 nova_compute[189485]: 2025-11-29 16:12:47.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 16:12:48 compute-0 nova_compute[189485]: 2025-11-29 16:12:48.408 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:51 compute-0 nova_compute[189485]: 2025-11-29 16:12:51.428 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:53 compute-0 nova_compute[189485]: 2025-11-29 16:12:53.413 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:56 compute-0 nova_compute[189485]: 2025-11-29 16:12:56.431 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:56 compute-0 podman[264732]: 2025-11-29 16:12:56.655746619 +0000 UTC m=+0.104110830 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:12:58 compute-0 nova_compute[189485]: 2025-11-29 16:12:58.415 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:12:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:12:59.237 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:12:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:12:59.238 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:12:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:12:59.238 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:12:59 compute-0 nova_compute[189485]: 2025-11-29 16:12:59.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:12:59 compute-0 podman[203677]: time="2025-11-29T16:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:12:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:12:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Nov 29 16:13:01 compute-0 openstack_network_exporter[205841]: ERROR   16:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:13:01 compute-0 openstack_network_exporter[205841]: ERROR   16:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:13:01 compute-0 openstack_network_exporter[205841]: ERROR   16:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:13:01 compute-0 openstack_network_exporter[205841]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:13:01 compute-0 openstack_network_exporter[205841]: ERROR   16:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:13:01 compute-0 nova_compute[189485]: 2025-11-29 16:13:01.433 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:03 compute-0 nova_compute[189485]: 2025-11-29 16:13:03.419 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:06 compute-0 nova_compute[189485]: 2025-11-29 16:13:06.435 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:08 compute-0 nova_compute[189485]: 2025-11-29 16:13:08.423 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:10 compute-0 podman[264758]: 2025-11-29 16:13:10.651162697 +0000 UTC m=+0.089525296 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Nov 29 16:13:10 compute-0 podman[264757]: 2025-11-29 16:13:10.666415607 +0000 UTC m=+0.098590380 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 16:13:10 compute-0 podman[264755]: 2025-11-29 16:13:10.687328109 +0000 UTC m=+0.124761503 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, architecture=x86_64, release-0.7.12=, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 16:13:10 compute-0 podman[264769]: 2025-11-29 16:13:10.688558313 +0000 UTC m=+0.114165049 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, container_name=openstack_network_exporter, io.buildah.version=1.33.7, vcs-type=git, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, config_id=edpm, version=9.6, io.openshift.expose-services=, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc.)
Nov 29 16:13:10 compute-0 podman[264756]: 2025-11-29 16:13:10.699037975 +0000 UTC m=+0.144527156 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 16:13:10 compute-0 podman[264764]: 2025-11-29 16:13:10.711471369 +0000 UTC m=+0.142908192 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Nov 29 16:13:11 compute-0 nova_compute[189485]: 2025-11-29 16:13:11.438 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:13 compute-0 nova_compute[189485]: 2025-11-29 16:13:13.426 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:14 compute-0 nova_compute[189485]: 2025-11-29 16:13:14.499 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:14 compute-0 nova_compute[189485]: 2025-11-29 16:13:14.500 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Nov 29 16:13:14 compute-0 podman[264867]: 2025-11-29 16:13:14.699238543 +0000 UTC m=+0.133980312 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:13:14 compute-0 podman[264866]: 2025-11-29 16:13:14.710046893 +0000 UTC m=+0.150305890 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Nov 29 16:13:16 compute-0 nova_compute[189485]: 2025-11-29 16:13:16.442 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:18 compute-0 nova_compute[189485]: 2025-11-29 16:13:18.431 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:21 compute-0 nova_compute[189485]: 2025-11-29 16:13:21.446 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:23 compute-0 nova_compute[189485]: 2025-11-29 16:13:23.435 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:26 compute-0 nova_compute[189485]: 2025-11-29 16:13:26.448 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:27 compute-0 podman[264910]: 2025-11-29 16:13:27.653273015 +0000 UTC m=+0.090528984 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 16:13:28 compute-0 nova_compute[189485]: 2025-11-29 16:13:28.438 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:29 compute-0 podman[203677]: time="2025-11-29T16:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:13:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:13:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
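The two GET requests above are the podman exporter scraping the libpod REST API through the socket it mounts (/run/podman/podman.sock, per the podman_exporter config_data at 16:13:27). A minimal sketch of the same containers/json call over a unix socket; the UnixHTTPConnection helper is an assumption for illustration, not podman tooling:

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket; the host arg only fills the Host header."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path
        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self.unix_path)
            self.sock = s

    # Same endpoint the exporter calls in the access log above.
    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true&external=false")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")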
Nov 29 16:13:31 compute-0 openstack_network_exporter[205841]: ERROR   16:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:13:31 compute-0 openstack_network_exporter[205841]: ERROR   16:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:13:31 compute-0 openstack_network_exporter[205841]: ERROR   16:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:13:31 compute-0 openstack_network_exporter[205841]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:13:31 compute-0 openstack_network_exporter[205841]: ERROR   16:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
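The exporter errors above all reduce to the same cause: ovs-appctl-style calls need a unix control socket such as ovs-vswitchd.<pid>.ctl in the daemon's rundir, and none was found. On a compute node ovn-northd does not run locally, so that failure is expected, and the dpif-netdev errors likely just mean no userspace (netdev) datapath exists here. A quick probe sketch, assuming the rundirs the exporter container mounts (/run/openvswitch and /run/ovn, per its config_data at 16:13:41):

    # Quick probe for ovs-appctl control sockets; the rundir paths are
    # assumptions taken from the exporter's volume mounts logged below.
    import glob, os

    for rundir in ("/run/openvswitch", "/run/ovn"):
        ctl = glob.glob(os.path.join(rundir, "*.ctl"))
        print(rundir, "->", ctl or "no control socket files found")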
Nov 29 16:13:31 compute-0 nova_compute[189485]: 2025-11-29 16:13:31.451 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:33 compute-0 nova_compute[189485]: 2025-11-29 16:13:33.443 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:35 compute-0 nova_compute[189485]: 2025-11-29 16:13:35.501 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:35 compute-0 nova_compute[189485]: 2025-11-29 16:13:35.502 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:13:35 compute-0 nova_compute[189485]: 2025-11-29 16:13:35.502 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:13:35 compute-0 nova_compute[189485]: 2025-11-29 16:13:35.532 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Nov 29 16:13:36 compute-0 nova_compute[189485]: 2025-11-29 16:13:36.454 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:38 compute-0 nova_compute[189485]: 2025-11-29 16:13:38.447 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:39 compute-0 nova_compute[189485]: 2025-11-29 16:13:39.508 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.457 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.540 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.541 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.542 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.543 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
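The oslo.concurrency lockutils lines above trace the full lock lifecycle around clean_compute_node_cache: an "Acquiring" line, an "acquired :: waited Ns" line, and a "released :: held Ns" line. A stdlib-only sketch of that timing pattern (a rough imitation, not oslo's implementation):

    # Minimal sketch of the acquire/waited/held trace pattern visible above;
    # names and behavior are illustrative, not oslo.concurrency internals.
    import contextlib, threading, time

    _locks = {}

    @contextlib.contextmanager
    def timed_lock(name):
        lock = _locks.setdefault(name, threading.Lock())
        t0 = time.monotonic()
        lock.acquire()                       # "Acquiring lock ..."
        waited = time.monotonic() - t0       # ":: waited Ns"
        t1 = time.monotonic()
        try:
            yield
        finally:
            held = time.monotonic() - t1     # ":: held Ns"
            lock.release()
            print(f'Lock "{name}" waited {waited:.3f}s, held {held:.3f}s')

    with timed_lock("compute_resources"):
        pass  # e.g. the cache-cleaning work logged above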
Nov 29 16:13:41 compute-0 podman[264935]: 2025-11-29 16:13:41.699351065 +0000 UTC m=+0.122981885 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=ovn_metadata_agent)
Nov 29 16:13:41 compute-0 podman[264942]: 2025-11-29 16:13:41.700546607 +0000 UTC m=+0.108772113 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d)
Nov 29 16:13:41 compute-0 podman[264934]: 2025-11-29 16:13:41.718955432 +0000 UTC m=+0.165279143 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., release-0.7.12=, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, version=9.4, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Nov 29 16:13:41 compute-0 podman[264939]: 2025-11-29 16:13:41.733208926 +0000 UTC m=+0.153990840 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Nov 29 16:13:41 compute-0 podman[264948]: 2025-11-29 16:13:41.73338103 +0000 UTC m=+0.134439094 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers)
Nov 29 16:13:41 compute-0 podman[264943]: 2025-11-29 16:13:41.784678958 +0000 UTC m=+0.185367692 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_controller)
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.912 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.913 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5334MB free_disk=72.30624389648438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.913 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:13:41 compute-0 nova_compute[189485]: 2025-11-29 16:13:41.913 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:13:42 compute-0 nova_compute[189485]: 2025-11-29 16:13:42.019 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:13:42 compute-0 nova_compute[189485]: 2025-11-29 16:13:42.020 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:13:42 compute-0 nova_compute[189485]: 2025-11-29 16:13:42.221 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:13:42 compute-0 nova_compute[189485]: 2025-11-29 16:13:42.241 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
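The inventory logged at 16:13:42.241 is what placement uses to size this provider: as I read placement's capacity rule, the effective schedulable amount per resource class is (total - reserved) * allocation_ratio, so the 8 physical vCPUs here back 32 schedulable VCPUs at ratio 4.0. A quick check against the logged numbers:

    # Effective capacity implied by the inventory logged above, using
    # placement's (total - reserved) * allocation_ratio rule.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2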
Nov 29 16:13:42 compute-0 nova_compute[189485]: 2025-11-29 16:13:42.243 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:13:42 compute-0 nova_compute[189485]: 2025-11-29 16:13:42.244 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.330s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:13:43 compute-0 nova_compute[189485]: 2025-11-29 16:13:43.244 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:43 compute-0 nova_compute[189485]: 2025-11-29 16:13:43.449 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:43 compute-0 nova_compute[189485]: 2025-11-29 16:13:43.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:45 compute-0 podman[265048]: 2025-11-29 16:13:45.653276539 +0000 UTC m=+0.098426825 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Nov 29 16:13:45 compute-0 podman[265049]: 2025-11-29 16:13:45.680387239 +0000 UTC m=+0.120765956 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:13:46 compute-0 nova_compute[189485]: 2025-11-29 16:13:46.460 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:46 compute-0 nova_compute[189485]: 2025-11-29 16:13:46.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:48 compute-0 nova_compute[189485]: 2025-11-29 16:13:48.453 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:48 compute-0 nova_compute[189485]: 2025-11-29 16:13:48.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:13:48 compute-0 nova_compute[189485]: 2025-11-29 16:13:48.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 16:13:51 compute-0 nova_compute[189485]: 2025-11-29 16:13:51.463 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:53 compute-0 nova_compute[189485]: 2025-11-29 16:13:53.456 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:56 compute-0 nova_compute[189485]: 2025-11-29 16:13:56.466 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:58 compute-0 nova_compute[189485]: 2025-11-29 16:13:58.459 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:13:58 compute-0 podman[265090]: 2025-11-29 16:13:58.639230313 +0000 UTC m=+0.088156890 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 16:13:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:13:59.239 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:13:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:13:59.239 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:13:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:13:59.240 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:13:59 compute-0 podman[203677]: time="2025-11-29T16:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:13:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:13:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.070 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.070 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.073 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.074 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.075 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.078 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.079 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.079 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.083 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.084 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.085 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.086 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.087 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.088 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.089 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.090 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.091 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc16f20170>] with cache [{}], pollster history [{'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'memory.usage': [], 'network.incoming.bytes': [], 'network.outgoing.bytes.rate': [], 'network.incoming.bytes.delta': [], 'network.outgoing.packets': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.capacity': [], 'cpu': [], 'disk.device.read.latency': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
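The registration lines above show each stevedore Extension from the [pollsters] namespace being bound to one shared ThreadPoolExecutor, with an empty per-cycle cache, a per-pollster history map, and a discovery cache keyed by discovery method. A minimal sketch of that fan-out pattern, assuming the structures visible in the log; names such as run_pollster are hypothetical, not ceilometer's actual API:

    from concurrent.futures import ThreadPoolExecutor

    def register_pollster_execution(executor, pollster, cache, history, futures):
        # Bind one pollster to the shared pool; cache/history mirror the
        # [{}] cache and pollster-history structures in the log lines above.
        history.setdefault(pollster, [])
        futures.append(executor.submit(run_pollster, pollster, cache, history))

    def run_pollster(pollster, cache, history):
        # Hypothetical worker; a real pollster would collect samples here.
        return []

    executor = ThreadPoolExecutor(max_workers=4)
    futures, history = [], {}
    for name in ("cpu", "memory.usage", "disk.device.read.bytes"):
        register_pollster_execution(executor, name, {}, history, futures)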
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.092 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.092 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.093 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.094 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.094 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.095 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.096 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
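Each _internal_pollster_run above first runs the [local_instances] discovery; because the discovery cache holds an empty instance list ({'local_instances': []}), every compute pollster is skipped for this cycle. A sketch of that discover-then-skip gate, assuming discovery results are cached per cycle; helper names here are invented for illustration:

    def internal_pollster_run(pollster_name, discover, discovery_cache):
        # Reuse the cached discovery result if this cycle already ran it.
        if 'local_instances' not in discovery_cache:
            discovery_cache['local_instances'] = discover()
        resources = discovery_cache['local_instances']
        if not resources:
            print(f"Skip pollster {pollster_name}, no resources found this cycle")
            return []
        return [s for r in resources for s in poll_one(pollster_name, r)]

    def poll_one(pollster_name, resource):
        # Hypothetical per-resource sampling.
        return []

    cache = {}
    internal_pollster_run("disk.device.read.requests", lambda: [], cache)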
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.097 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.098 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.099 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.100 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.103 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.104 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.105 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.106 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.107 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.108 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.109 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:14:01.110 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:14:01 compute-0 openstack_network_exporter[205841]: ERROR   16:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:14:01 compute-0 openstack_network_exporter[205841]: ERROR   16:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:14:01 compute-0 openstack_network_exporter[205841]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:14:01 compute-0 openstack_network_exporter[205841]: ERROR   16:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
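The exporter errors above fail before any appctl call is issued: the daemon control sockets (conventionally created as <daemon>.<pid>.ctl under the OVS run directory) are not visible to the exporter process. A small probe of that precondition, with socket paths assumed from the container volume mounts logged further below:

    import glob
    import subprocess

    def appctl(sock_glob, *cmd):
        # ovs-appctl needs an explicit -t <socket> when the target daemon's
        # control socket cannot be discovered; fail the same way the exporter
        # does if no control socket file exists.
        socks = glob.glob(sock_glob)
        if not socks:
            raise FileNotFoundError(f"no control socket files found for {sock_glob}")
        return subprocess.run(["ovs-appctl", "-t", socks[0], *cmd],
                              capture_output=True, text=True, check=True).stdout

    # e.g. appctl("/var/run/openvswitch/ovsdb-server.*.ctl", "version")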
Nov 29 16:14:01 compute-0 nova_compute[189485]: 2025-11-29 16:14:01.468 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:01 compute-0 nova_compute[189485]: 2025-11-29 16:14:01.480 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:14:03 compute-0 nova_compute[189485]: 2025-11-29 16:14:03.464 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:06 compute-0 nova_compute[189485]: 2025-11-29 16:14:06.470 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:08 compute-0 nova_compute[189485]: 2025-11-29 16:14:08.467 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:11 compute-0 nova_compute[189485]: 2025-11-29 16:14:11.473 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
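The recurring ovsdbapp [POLLIN] lines are the OVSDB IDL's poll loop waking whenever the connection on fd 26 has data, logged at DEBUG from ovs/poller.py. The underlying pattern reduced to the standard library; a sketch, not ovsdbapp code:

    import os
    import select

    # A pipe stands in for the OVSDB socket seen as fd 26 above.
    r_fd, w_fd = os.pipe()
    poller = select.poll()
    poller.register(r_fd, select.POLLIN)

    os.write(w_fd, b"update")              # peer activity
    for fd, events in poller.poll(1000):
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")  # mirrors __log_wakeup above
            os.read(fd, 1024)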
Nov 29 16:14:12 compute-0 podman[265116]: 2025-11-29 16:14:12.670096039 +0000 UTC m=+0.115496664 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.openshift.tags=base rhel9, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, release-0.7.12=, config_id=edpm, io.buildah.version=1.29.0)
Nov 29 16:14:12 compute-0 podman[265117]: 2025-11-29 16:14:12.690369434 +0000 UTC m=+0.122926914 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, config_id=ovn_metadata_agent, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:14:12 compute-0 podman[265131]: 2025-11-29 16:14:12.692587883 +0000 UTC m=+0.090352808 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, architecture=x86_64, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, config_id=edpm, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Nov 29 16:14:12 compute-0 podman[265118]: 2025-11-29 16:14:12.701799691 +0000 UTC m=+0.123848539 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:14:12 compute-0 podman[265126]: 2025-11-29 16:14:12.709414326 +0000 UTC m=+0.126575922 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 16:14:12 compute-0 podman[265124]: 2025-11-29 16:14:12.721735287 +0000 UTC m=+0.137006023 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
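The podman health_status events above are emitted each time a container's configured healthcheck (test '/openstack/healthcheck ...') runs and passes. The same check can be driven by hand with the real 'podman healthcheck run' subcommand; this wrapper is just a convenience:

    import subprocess

    def container_healthy(name: str) -> bool:
        # 'podman healthcheck run' executes the container's own check command
        # and exits 0 when it passes, matching health_status=healthy above.
        return subprocess.run(["podman", "healthcheck", "run", name],
                              capture_output=True).returncode == 0

    # e.g. container_healthy("ovn_metadata_agent")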
Nov 29 16:14:13 compute-0 nova_compute[189485]: 2025-11-29 16:14:13.470 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:14:16 compute-0 nova_compute[189485]: 2025-11-29 16:14:16.477 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:14:16 compute-0 podman[265234]: 2025-11-29 16:14:16.649254082 +0000 UTC m=+0.084114292 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Nov 29 16:14:16 compute-0 podman[265233]: 2025-11-29 16:14:16.7061169 +0000 UTC m=+0.146196060 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
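node_exporter above runs with most collectors disabled and '--collector.systemd' scoped by a unit-include regex, listening on host port 9100 behind the TLS settings in node_exporter.yaml. A spot-check of the systemd unit metrics it exposes, assuming a plain-HTTP listener for this sketch (the deployment actually configures TLS via --web.config.file):

    import urllib.request

    with urllib.request.urlopen("http://localhost:9100/metrics", timeout=5) as resp:
        body = resp.read().decode()

    # node_systemd_unit_state comes from the systemd collector enabled above.
    for line in body.splitlines():
        if line.startswith("node_systemd_unit_state"):
            print(line)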
Nov 29 16:14:18 compute-0 nova_compute[189485]: 2025-11-29 16:14:18.474 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:21 compute-0 nova_compute[189485]: 2025-11-29 16:14:21.482 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:23 compute-0 nova_compute[189485]: 2025-11-29 16:14:23.480 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:26 compute-0 nova_compute[189485]: 2025-11-29 16:14:26.485 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:28 compute-0 nova_compute[189485]: 2025-11-29 16:14:28.483 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:29 compute-0 podman[265278]: 2025-11-29 16:14:29.655386346 +0000 UTC m=+0.102162417 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 16:14:29 compute-0 podman[203677]: time="2025-11-29T16:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:14:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:14:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Nov 29 16:14:31 compute-0 openstack_network_exporter[205841]: ERROR   16:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:14:31 compute-0 openstack_network_exporter[205841]: ERROR   16:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:14:31 compute-0 openstack_network_exporter[205841]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:14:31 compute-0 openstack_network_exporter[205841]: ERROR   16:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:14:31 compute-0 nova_compute[189485]: 2025-11-29 16:14:31.489 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:33 compute-0 nova_compute[189485]: 2025-11-29 16:14:33.487 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:36 compute-0 nova_compute[189485]: 2025-11-29 16:14:36.492 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:37 compute-0 nova_compute[189485]: 2025-11-29 16:14:37.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:14:37 compute-0 nova_compute[189485]: 2025-11-29 16:14:37.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Nov 29 16:14:37 compute-0 nova_compute[189485]: 2025-11-29 16:14:37.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Nov 29 16:14:37 compute-0 nova_compute[189485]: 2025-11-29 16:14:37.580 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
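ComputeManager._heal_instance_info_cache above rebuilds its instance list on each pass and, finding none, returns without touching the network info cache. The driving mechanism is oslo.service's periodic task machinery; a pared-down sketch of that decorator pattern (requires oslo.service; _instances_to_heal is a hypothetical helper, and this is not nova's actual class):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            instances = self._instances_to_heal()
            if not instances:
                # "Didn't find any instances for network info cache update."
                return

        def _instances_to_heal(self):
            return []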
Nov 29 16:14:38 compute-0 nova_compute[189485]: 2025-11-29 16:14:38.489 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:41 compute-0 nova_compute[189485]: 2025-11-29 16:14:41.496 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:41 compute-0 nova_compute[189485]: 2025-11-29 16:14:41.575 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:14:42 compute-0 nova_compute[189485]: 2025-11-29 16:14:42.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:14:42 compute-0 nova_compute[189485]: 2025-11-29 16:14:42.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:14:42 compute-0 nova_compute[189485]: 2025-11-29 16:14:42.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.044 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.044 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.045 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
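The acquire/release pairs above (waited 0.000s, held 0.000s) come from oslo.concurrency wrapping short resource-tracker critical sections; the logged "inner" frames in lockutils.py are its wrapper function. The usual way to produce exactly this pattern is the real lockutils.synchronized decorator; the function body here is a placeholder:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # Runs only while holding the "compute_resources" semaphore; the
        # wrapper emits the Acquiring/acquired/released DEBUG lines above.
        pass

    clean_compute_node_cache()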
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.045 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.406 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.407 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5342MB free_disk=72.30624389648438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.408 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.408 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.491 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:14:43 compute-0 podman[265302]: 2025-11-29 16:14:43.637859618 +0000 UTC m=+0.089458006 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.663 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.663 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Nov 29 16:14:43 compute-0 podman[265309]: 2025-11-29 16:14:43.66734311 +0000 UTC m=+0.100527903 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, io.buildah.version=1.33.7, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, io.openshift.expose-services=, release=1755695350)
Nov 29 16:14:43 compute-0 podman[265304]: 2025-11-29 16:14:43.66923844 +0000 UTC m=+0.113209564 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, io.buildah.version=1.41.4)
Nov 29 16:14:43 compute-0 podman[265301]: 2025-11-29 16:14:43.670798182 +0000 UTC m=+0.114232110 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, com.redhat.component=ubi9-container, version=9.4, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543)
Nov 29 16:14:43 compute-0 podman[265303]: 2025-11-29 16:14:43.672797437 +0000 UTC m=+0.110923863 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:14:43 compute-0 podman[265305]: 2025-11-29 16:14:43.687178593 +0000 UTC m=+0.128987857 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.725 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.742 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
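
The inventory payload above is what Placement uses to size this node; for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A minimal sketch of that arithmetic, using the exact figures from the log line:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
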
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.743 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Nov 29 16:14:43 compute-0 nova_compute[189485]: 2025-11-29 16:14:43.744 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.336s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:14:44 compute-0 nova_compute[189485]: 2025-11-29 16:14:44.743 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:14:44 compute-0 nova_compute[189485]: 2025-11-29 16:14:44.743 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:14:46 compute-0 nova_compute[189485]: 2025-11-29 16:14:46.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:14:46 compute-0 nova_compute[189485]: 2025-11-29 16:14:46.497 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:14:47 compute-0 podman[265418]: 2025-11-29 16:14:47.681447631 +0000 UTC m=+0.109593756 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
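
The --collector.systemd.unit-include value in the node_exporter command line above is a Go regular expression, which node_exporter anchors at both ends, so a full match is the right comparison. Testing it with Python's re (compatible for this pattern) shows which units are kept; the unit names below are illustrative examples, not read from this host:

    import re

    pat = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ("edpm_nova_compute.service", "ovs-vswitchd.service",
                 "virtqemud.service", "sshd.service"):
        print(unit, bool(pat.fullmatch(unit)))  # sshd.service -> False
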
Nov 29 16:14:47 compute-0 podman[265417]: 2025-11-29 16:14:47.701329276 +0000 UTC m=+0.134805544 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_id=multipathd, container_name=multipathd)
Nov 29 16:14:48 compute-0 nova_compute[189485]: 2025-11-29 16:14:48.495 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:14:49 compute-0 nova_compute[189485]: 2025-11-29 16:14:49.483 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:14:49 compute-0 nova_compute[189485]: 2025-11-29 16:14:49.484 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
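
The "skipping" line above is the deferred-delete guard: soft-deleted instances are only reclaimed when reclaim_instance_interval is a positive number of seconds. A minimal oslo.config sketch of the check (option name taken from the log line; 0 is also the default, which disables deferred delete):

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("reclaim_instance_interval", default=0)])

    if CONF.reclaim_instance_interval <= 0:
        print("deferred delete disabled; _reclaim_queued_deletes skips")
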
Nov 29 16:14:51 compute-0 nova_compute[189485]: 2025-11-29 16:14:51.500 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:14:53 compute-0 nova_compute[189485]: 2025-11-29 16:14:53.498 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:14:56 compute-0 nova_compute[189485]: 2025-11-29 16:14:56.503 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:14:58 compute-0 nova_compute[189485]: 2025-11-29 16:14:58.501 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:14:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:14:59.241 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:14:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:14:59.242 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:14:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:14:59.242 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Nov 29 16:14:59 compute-0 podman[203677]: time="2025-11-29T16:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:14:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:14:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
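
The two HTTP access-log lines above are podman_exporter polling the libpod REST API over the podman UNIX socket. A sketch of the same containers/json query using the podman Python client (an assumption for illustration: podman-py installed; the exporter itself is a Go binary talking to the same socket):

    from podman import PodmanClient

    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        # Mirrors GET /libpod/containers/json?all=true from the log above.
        for c in client.containers.list(all=True):
            print(c.name, c.status)
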
Nov 29 16:15:00 compute-0 podman[265459]: 2025-11-29 16:15:00.625709483 +0000 UTC m=+0.077362221 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:15:01 compute-0 openstack_network_exporter[205841]: ERROR   16:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:15:01 compute-0 openstack_network_exporter[205841]: ERROR   16:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:15:01 compute-0 openstack_network_exporter[205841]: ERROR   16:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:15:01 compute-0 openstack_network_exporter[205841]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:15:01 compute-0 openstack_network_exporter[205841]: ERROR   16:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
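
The error burst above recurs every polling cycle and is expected on a compute node: the exporter probes daemons through their *.ctl control sockets, but ovn-northd runs on controller nodes, and the dpif-netdev/pmd-* appctl calls only apply to a userspace (DPDK) datapath, which this host does not run. A diagnostic sketch (the socket directories are the usual defaults, an assumption here):

    import glob

    for rundir, daemon in (
        ("/run/openvswitch", "ovsdb-server / ovs-vswitchd"),
        ("/run/ovn", "ovn-northd (not expected on a compute node)"),
    ):
        # Each running daemon exposes a <name>.<pid>.ctl control socket.
        print(daemon, glob.glob(f"{rundir}/*.ctl") or "no control sockets")
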
Nov 29 16:15:01 compute-0 nova_compute[189485]: 2025-11-29 16:15:01.505 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:03 compute-0 nova_compute[189485]: 2025-11-29 16:15:03.506 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:06 compute-0 nova_compute[189485]: 2025-11-29 16:15:06.507 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:08 compute-0 nova_compute[189485]: 2025-11-29 16:15:08.508 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:11 compute-0 nova_compute[189485]: 2025-11-29 16:15:11.510 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:13 compute-0 nova_compute[189485]: 2025-11-29 16:15:13.511 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:14 compute-0 podman[265485]: 2025-11-29 16:15:14.699733753 +0000 UTC m=+0.124946069 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Nov 29 16:15:14 compute-0 podman[265499]: 2025-11-29 16:15:14.707194924 +0000 UTC m=+0.109853114 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, architecture=x86_64, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, config_id=edpm, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Nov 29 16:15:14 compute-0 podman[265484]: 2025-11-29 16:15:14.712465196 +0000 UTC m=+0.152711656 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 16:15:14 compute-0 podman[265483]: 2025-11-29 16:15:14.72118715 +0000 UTC m=+0.156998410 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, distribution-scope=public, release=1214.1726694543, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 29 16:15:14 compute-0 podman[265496]: 2025-11-29 16:15:14.743542001 +0000 UTC m=+0.152087609 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Nov 29 16:15:14 compute-0 podman[265486]: 2025-11-29 16:15:14.754715921 +0000 UTC m=+0.169099895 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team)
Nov 29 16:15:16 compute-0 nova_compute[189485]: 2025-11-29 16:15:16.517 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:18 compute-0 nova_compute[189485]: 2025-11-29 16:15:18.514 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:18 compute-0 podman[265604]: 2025-11-29 16:15:18.681204978 +0000 UTC m=+0.107884281 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Nov 29 16:15:18 compute-0 podman[265603]: 2025-11-29 16:15:18.688585686 +0000 UTC m=+0.121084815 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:15:21 compute-0 nova_compute[189485]: 2025-11-29 16:15:21.518 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:23 compute-0 nova_compute[189485]: 2025-11-29 16:15:23.517 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:26 compute-0 nova_compute[189485]: 2025-11-29 16:15:26.523 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:28 compute-0 nova_compute[189485]: 2025-11-29 16:15:28.520 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:29 compute-0 podman[203677]: time="2025-11-29T16:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:15:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:15:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Nov 29 16:15:31 compute-0 openstack_network_exporter[205841]: ERROR   16:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:15:31 compute-0 openstack_network_exporter[205841]: ERROR   16:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:15:31 compute-0 openstack_network_exporter[205841]: ERROR   16:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:15:31 compute-0 openstack_network_exporter[205841]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:15:31 compute-0 openstack_network_exporter[205841]: ERROR   16:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:15:31 compute-0 nova_compute[189485]: 2025-11-29 16:15:31.527 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:31 compute-0 podman[265646]: 2025-11-29 16:15:31.654969893 +0000 UTC m=+0.109622457 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Nov 29 16:15:33 compute-0 nova_compute[189485]: 2025-11-29 16:15:33.523 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:36 compute-0 nova_compute[189485]: 2025-11-29 16:15:36.530 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:38 compute-0 nova_compute[189485]: 2025-11-29 16:15:38.526 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:39 compute-0 nova_compute[189485]: 2025-11-29 16:15:39.485 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:15:39 compute-0 nova_compute[189485]: 2025-11-29 16:15:39.486 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Nov 29 16:15:39 compute-0 nova_compute[189485]: 2025-11-29 16:15:39.487 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Nov 29 16:15:39 compute-0 nova_compute[189485]: 2025-11-29 16:15:39.512 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Nov 29 16:15:41 compute-0 nova_compute[189485]: 2025-11-29 16:15:41.506 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:15:41 compute-0 nova_compute[189485]: 2025-11-29 16:15:41.533 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Nov 29 16:15:42 compute-0 nova_compute[189485]: 2025-11-29 16:15:42.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Nov 29 16:15:42 compute-0 nova_compute[189485]: 2025-11-29 16:15:42.526 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Nov 29 16:15:42 compute-0 nova_compute[189485]: 2025-11-29 16:15:42.527 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Nov 29 16:15:42 compute-0 nova_compute[189485]: 2025-11-29 16:15:42.528 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:15:42 compute-0 nova_compute[189485]: 2025-11-29 16:15:42.529 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Nov 29 16:15:43 compute-0 nova_compute[189485]: 2025-11-29 16:15:43.016 189489 WARNING nova.virt.libvirt.driver [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Nov 29 16:15:43 compute-0 nova_compute[189485]: 2025-11-29 16:15:43.017 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5354MB free_disk=72.30624389648438GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Nov 29 16:15:43 compute-0 nova_compute[189485]: 2025-11-29 16:15:43.017 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:15:43 compute-0 nova_compute[189485]: 2025-11-29 16:15:43.017 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:15:43 compute-0 nova_compute[189485]: 2025-11-29 16:15:43.529 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:15:43 compute-0 nova_compute[189485]: 2025-11-29 16:15:43.874 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Nov 29 16:15:43 compute-0 nova_compute[189485]: 2025-11-29 16:15:43.875 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Nov 29 16:15:44 compute-0 nova_compute[189485]: 2025-11-29 16:15:44.243 189489 DEBUG nova.compute.provider_tree [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed in ProviderTree for provider: 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Nov 29 16:15:44 compute-0 nova_compute[189485]: 2025-11-29 16:15:44.265 189489 DEBUG nova.scheduler.client.report [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Inventory has not changed for provider 4d7b41cb-fd09-4d7d-96d2-9e9db6a799bd based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Nov 29 16:15:44 compute-0 nova_compute[189485]: 2025-11-29 16:15:44.268 189489 DEBUG nova.compute.resource_tracker [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Nov 29 16:15:44 compute-0 nova_compute[189485]: 2025-11-29 16:15:44.269 189489 DEBUG oslo_concurrency.lockutils [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.251s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:15:45 compute-0 nova_compute[189485]: 2025-11-29 16:15:45.269 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:15:45 compute-0 nova_compute[189485]: 2025-11-29 16:15:45.270 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:15:45 compute-0 nova_compute[189485]: 2025-11-29 16:15:45.270 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:15:45 compute-0 nova_compute[189485]: 2025-11-29 16:15:45.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:15:45 compute-0 podman[265670]: 2025-11-29 16:15:45.679423613 +0000 UTC m=+0.107126961 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Nov 29 16:15:45 compute-0 podman[265669]: 2025-11-29 16:15:45.680509121 +0000 UTC m=+0.117129058 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Nov 29 16:15:45 compute-0 podman[265671]: 2025-11-29 16:15:45.704482115 +0000 UTC m=+0.122630646 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Nov 29 16:15:45 compute-0 podman[265679]: 2025-11-29 16:15:45.704862186 +0000 UTC m=+0.123473939 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., config_id=edpm, io.buildah.version=1.33.7, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public)
Nov 29 16:15:45 compute-0 podman[265672]: 2025-11-29 16:15:45.722833479 +0000 UTC m=+0.155679815 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Nov 29 16:15:45 compute-0 podman[265673]: 2025-11-29 16:15:45.7615755 +0000 UTC m=+0.172786124 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true)
Nov 29 16:15:46 compute-0 nova_compute[189485]: 2025-11-29 16:15:46.482 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:15:46 compute-0 nova_compute[189485]: 2025-11-29 16:15:46.536 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:15:48 compute-0 nova_compute[189485]: 2025-11-29 16:15:48.532 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:15:49 compute-0 podman[265783]: 2025-11-29 16:15:49.685265251 +0000 UTC m=+0.118723761 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Nov 29 16:15:49 compute-0 podman[265782]: 2025-11-29 16:15:49.704720873 +0000 UTC m=+0.144377071 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629)
Nov 29 16:15:51 compute-0 nova_compute[189485]: 2025-11-29 16:15:51.484 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:15:51 compute-0 nova_compute[189485]: 2025-11-29 16:15:51.485 189489 DEBUG nova.compute.manager [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Nov 29 16:15:51 compute-0 nova_compute[189485]: 2025-11-29 16:15:51.539 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:15:53 compute-0 nova_compute[189485]: 2025-11-29 16:15:53.535 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:15:56 compute-0 nova_compute[189485]: 2025-11-29 16:15:56.544 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:15:58 compute-0 nova_compute[189485]: 2025-11-29 16:15:58.539 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:15:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:15:59.241 106713 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Nov 29 16:15:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:15:59.242 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Nov 29 16:15:59 compute-0 ovn_metadata_agent[106708]: 2025-11-29 16:15:59.242 106713 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Nov 29 16:15:59 compute-0 podman[203677]: time="2025-11-29T16:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:15:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:15:59 compute-0 podman[203677]: @ - - [29/Nov/2025:16:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.071 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.071 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.071 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.072 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7fdc1c52ffe0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d80e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f920>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f950>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8170>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.074 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f9b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d8200>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f646270>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.075 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f4473b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f3fcf80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fc50>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.076 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.077 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f5c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c5d85c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f620>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f345640>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.078 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f680>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1f82b6b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52fec0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52f6e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.079 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.080 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7fdc1c52ff80>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7fdc1c3501d0>] with cache [{}], pollster history [{'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.077 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7fdc1c5d80b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.080 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7fdc1c52f8f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7fdc1d66e8d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.081 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7fdc1c5d8140>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.081 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7fdc1c52f980>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7fdc1c5d81d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7fdc1c52f410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.082 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7fdc1c5d82c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7fdc1f3863f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7fdc1c52dac0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.083 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7fdc1c52f350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7fdc1c52fe60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7fdc1c52f470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.084 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.084 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7fdc1c52f4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7fdc1c52f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7fdc1c52f590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.085 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.085 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7fdc1c5d8590>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7fdc1c52f5f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7fdc1c5d8260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.086 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7fdc1c52f650>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.086 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7fdc1f3d6000>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7fdc1c52fe90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.087 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7fdc1c52f6b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.087 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7fdc1c52fef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.088 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7fdc1c52ff50>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7fdc1d6f3e00>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.088 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.089 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.090 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.091 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.092 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Nov 29 16:16:01 compute-0 ceilometer_agent_compute[200190]: 2025-11-29 16:16:01.093 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
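The DEBUG lines above trace one complete polling cycle: for each pollster the agent runs the local_instances discovery, skips the pollster when discovery returns nothing, and logs "Finished processing" for each meter it did evaluate. A minimal sketch of that control flow, with hypothetical class and function names rather than ceilometer's actual implementation:

```python
# Illustrative sketch of the discover/skip/process cycle seen in the
# DEBUG lines above; names are hypothetical, not the real
# ceilometer.polling.manager code.
class Pollster:
    def __init__(self, name):
        self.name = name

    def get_samples(self, resources):
        # Real pollsters read libvirt/hypervisor stats; here we just
        # pair the meter name with each discovered resource.
        return [(self.name, r) for r in resources]

def run_cycle(pollsters, discover):
    for p in pollsters:
        resources = discover()          # "local_instances" discovery
        if not resources:
            print(f"Skip pollster {p.name}, no resources found this cycle")
            continue
        samples = p.get_samples(resources)
        print(f"Finished processing pollster [{p.name}] "
              f"({len(samples)} samples)")

# With no instances on the host, every pollster is skipped -- the same
# pattern logged on this compute node.
run_cycle([Pollster("disk.root.size"), Pollster("memory.usage")],
          discover=lambda: [])
```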
Nov 29 16:16:01 compute-0 openstack_network_exporter[205841]: ERROR   16:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:16:01 compute-0 openstack_network_exporter[205841]: ERROR   16:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:16:01 compute-0 openstack_network_exporter[205841]: ERROR   16:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:16:01 compute-0 openstack_network_exporter[205841]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:16:01 compute-0 openstack_network_exporter[205841]: ERROR   16:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
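These exporter errors repeat every scrape interval: appctl.go looks for the management control sockets of ovsdb-server and ovn-northd before issuing its calls, and the dpif-netdev/pmd-* commands additionally require a userspace (netdev) datapath, which this node does not run. A quick way to confirm which control sockets actually exist, assuming the conventional OVS/OVN runtime directories:

```python
# Hedged check for the *.ctl control sockets the exporter could not find.
# The glob patterns assume default runtime directories; adjust for your
# deployment.
import glob

patterns = [
    "/var/run/openvswitch/ovsdb-server.*.ctl",
    "/var/run/openvswitch/ovs-vswitchd.*.ctl",
    "/run/ovn/ovn-northd.*.ctl",
]
for pat in patterns:
    hits = glob.glob(pat)
    print(f"{pat}: {hits if hits else 'no control socket found'}")
```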
Nov 29 16:16:01 compute-0 nova_compute[189485]: 2025-11-29 16:16:01.546 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:02 compute-0 nova_compute[189485]: 2025-11-29 16:16:02.479 189489 DEBUG oslo_service.periodic_task [None req-a545e679-0a59-48d3-85fc-94368fdcaa15 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Nov 29 16:16:02 compute-0 podman[265826]: 2025-11-29 16:16:02.662209463 +0000 UTC m=+0.104372357 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Nov 29 16:16:03 compute-0 nova_compute[189485]: 2025-11-29 16:16:03.542 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:06 compute-0 nova_compute[189485]: 2025-11-29 16:16:06.549 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:08 compute-0 nova_compute[189485]: 2025-11-29 16:16:08.545 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:11 compute-0 nova_compute[189485]: 2025-11-29 16:16:11.551 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:13 compute-0 nova_compute[189485]: 2025-11-29 16:16:13.548 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:16 compute-0 nova_compute[189485]: 2025-11-29 16:16:16.555 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
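The recurring "[POLLIN] on fd 26" lines are the ovsdbapp IDL waking whenever its OVSDB connection becomes readable: the IDL parks the socket fd in an ovs.poller.Poller and blocks until the kernel reports data. A self-contained sketch of that wait pattern, using a pipe in place of the database socket (requires the python "ovs" package that ships poller.py):

```python
# Hedged sketch of the event wait behind the "[POLLIN] on fd N" DEBUG
# lines; the pipe stands in for the OVSDB connection fd.
import os
import ovs.poller

rfd, wfd = os.pipe()
os.write(wfd, b"x")                  # make rfd readable

p = ovs.poller.Poller()
p.fd_wait(rfd, ovs.poller.POLLIN)    # the same wait the IDL registers
p.block()                            # returns once rfd is readable
print("woke up: fd", rfd, "is readable")
```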
Nov 29 16:16:16 compute-0 podman[265848]: 2025-11-29 16:16:16.672117352 +0000 UTC m=+0.111884748 container health_status 327a0bf0339f91fb9aa743d83bba9354e17412a219927cf26d78d5f8fd4dd3da (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, release-0.7.12=, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Nov 29 16:16:16 compute-0 podman[265849]: 2025-11-29 16:16:16.679294124 +0000 UTC m=+0.111978640 container health_status 39119cb32f97014547ba2c60f43c00e43425ed1df74faaff249f1635844c25c1 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251125)
Nov 29 16:16:16 compute-0 podman[265864]: 2025-11-29 16:16:16.708979542 +0000 UTC m=+0.108730402 container health_status e5839a970d26df4b5816c034c4b3558e9be82a73291e316b23187d3cbe17acfa (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, release=1755695350, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc.)
Nov 29 16:16:16 compute-0 podman[265856]: 2025-11-29 16:16:16.718496728 +0000 UTC m=+0.127209000 container health_status 83f5368ee71cc0594a54d35654073b62730bd857d9235d40b84fc997fb17e3b1 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_build_tag=2fed71776bb81f75707b655f8aa13a5d, config_id=edpm, container_name=ceilometer_agent_compute)
Nov 29 16:16:16 compute-0 podman[265850]: 2025-11-29 16:16:16.730040818 +0000 UTC m=+0.146543679 container health_status 6528f446ecd8bf77e24d834d528f5138c59ce48f706c57ff53f5080fc7a11ecf (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Nov 29 16:16:16 compute-0 podman[265863]: 2025-11-29 16:16:16.745811762 +0000 UTC m=+0.147682030 container health_status c749feec1c13ebc7feaefc0dea682eec930a953647b67d2de22914f020be865b (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251125, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
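Each podman[...] health_status line above is the result of a scheduled container healthcheck; the embedded config_data shows the healthcheck test and mount that edpm_ansible configured for each container. The same check can be run on demand; a small sketch using the podman CLI (container name taken from the log, podman assumed on PATH):

```python
# Hedged sketch: re-run the healthcheck of one of the containers above.
# "podman healthcheck run" exits 0 when the check passes.
import subprocess

name = "ceilometer_agent_compute"    # any container_name from the log
rc = subprocess.run(["podman", "healthcheck", "run", name]).returncode
print(name, "healthy" if rc == 0 else f"unhealthy (rc={rc})")
```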
Nov 29 16:16:18 compute-0 nova_compute[189485]: 2025-11-29 16:16:18.552 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:20 compute-0 podman[265967]: 2025-11-29 16:16:20.679714308 +0000 UTC m=+0.115564537 container health_status 2ac126febe644431f518c0e44532f26ee43bbed67280d579fc4190e9a254ee88 (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=1f5c0439f2433cb462b222a5bb23e629, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd)
Nov 29 16:16:20 compute-0 podman[265968]: 2025-11-29 16:16:20.687299832 +0000 UTC m=+0.120448008 container health_status e8d66bb77c433c032dedb2dde7770f8f63cb6c07e00ef803a1e7a0a451a1ff22 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
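The node_exporter container above publishes host port 9100 and passes --web.config.file, i.e. the scrape endpoint is served over TLS with deployment-internal certificates. A local probe consistent with that config (certificate verification disabled because the telemetry CA is internal; port taken from the ports mapping):

```python
# Hedged probe of the node_exporter TLS endpoint mapped above.
import ssl
import urllib.request

ctx = ssl.create_default_context()
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE      # internal cert, probe only

with urllib.request.urlopen("https://localhost:9100/metrics",
                            context=ctx) as resp:
    print(resp.status, resp.read(200).decode(), "...")
```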
Nov 29 16:16:21 compute-0 systemd-logind[794]: New session 34 of user zuul.
Nov 29 16:16:21 compute-0 systemd[1]: Started Session 34 of User zuul.
Nov 29 16:16:21 compute-0 nova_compute[189485]: 2025-11-29 16:16:21.560 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:23 compute-0 nova_compute[189485]: 2025-11-29 16:16:23.555 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:26 compute-0 nova_compute[189485]: 2025-11-29 16:16:26.560 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:26 compute-0 ovs-vsctl[266179]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
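The ovs-vsctl ERR above is the normal failure mode of a bare `get Open_vSwitch . other_config:dpdk-init` on a host where DPDK was never enabled: the key simply is not present in the column. Passing --if-exists turns the error into empty output; a sketch of the tolerant form (ovs-vsctl assumed on PATH):

```python
# Hedged sketch: query other_config:dpdk-init without erroring when the
# key is unset, mirroring the failed call logged above.
import subprocess

out = subprocess.run(
    ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
     "other_config:dpdk-init"],
    capture_output=True, text=True)
print("dpdk-init =", out.stdout.strip() or "<unset>")
```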
Nov 29 16:16:28 compute-0 virtqemud[189062]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Nov 29 16:16:28 compute-0 virtqemud[189062]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Nov 29 16:16:28 compute-0 virtqemud[189062]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
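virtqemud is one of libvirt's modular daemons; on this kind of lookup it probes the read-only sockets of its peer daemons (virtnetworkd, virtnwfilterd, virtstoraged), and the three messages above just mean those daemons are not activated on this node. The presence check is trivial to reproduce (socket paths taken from the log):

```python
# Hedged check for the modular libvirt daemon sockets virtqemud probed.
import os

for sock in ("/var/run/libvirt/virtnetworkd-sock-ro",
             "/var/run/libvirt/virtnwfilterd-sock-ro",
             "/var/run/libvirt/virtstoraged-sock-ro"):
    print(sock, "exists" if os.path.exists(sock) else "missing")
```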
Nov 29 16:16:28 compute-0 nova_compute[189485]: 2025-11-29 16:16:28.557 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:29 compute-0 podman[203677]: time="2025-11-29T16:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Nov 29 16:16:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Nov 29 16:16:29 compute-0 podman[203677]: @ - - [29/Nov/2025:16:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4341 "" "Go-http-client/1.1"
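The podman[203677] lines are the podman service's HTTP API access log: a libpod containers/json listing followed by a containers/stats call, both arriving over the unix socket that podman_exporter mounts. The same listing can be requested with the standard library alone; a sketch over the socket path shown in the exporter's config (root access to the socket is typically required):

```python
# Hedged sketch of the libpod REST call logged above, issued over the
# podman unix socket with stdlib http.client only.
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, path):
        super().__init__("localhost")   # host is unused for AF_UNIX
        self.unix_path = path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.unix_path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")
```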
Nov 29 16:16:31 compute-0 openstack_network_exporter[205841]: ERROR   16:16:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Nov 29 16:16:31 compute-0 openstack_network_exporter[205841]: ERROR   16:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:16:31 compute-0 openstack_network_exporter[205841]: ERROR   16:16:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Nov 29 16:16:31 compute-0 openstack_network_exporter[205841]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Nov 29 16:16:31 compute-0 openstack_network_exporter[205841]: ERROR   16:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Nov 29 16:16:31 compute-0 nova_compute[189485]: 2025-11-29 16:16:31.562 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:32 compute-0 systemd[1]: Starting Hostname Service...
Nov 29 16:16:32 compute-0 systemd[1]: Started Hostname Service.
Nov 29 16:16:33 compute-0 nova_compute[189485]: 2025-11-29 16:16:33.560 189489 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Nov 29 16:16:33 compute-0 podman[266772]: 2025-11-29 16:16:33.620899267 +0000 UTC m=+0.097064069 container health_status 55769e20ce8d9ecd15d312245cbda9ddcb14692e6cc5261440b5d7adc5686dc7 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
